

# Managing storage on FSx for Windows File Server

Your file system's storage configuration includes the amount of provisioned storage capacity, the storage type, and if the storage type is solid state drive (SSD), the amount of SSD IOPS. You can configure these resources, along with the file system's throughput capacity, when creating a file system and after it's created, to achieve the desired performance for your workload. Learn how to manage your file system's storage and storage-related performance using the AWS Management Console, AWS CLI, and the Amazon FSx CLI for remote management on PowerShell by exploring the following topics.

**Topics**
+ [Optimizing storage costs](#optimize-storage-costs)
+ [Managing storage capacity](#managing-storage-capacity)
+ [Managing your file system's storage type](#managing-storage-type)
+ [Managing SSD IOPS](#managing-provisioned-ssd-iops)
+ [Reducing storage costs with Data Deduplication](#using-data-dedup)
+ [Managing storage quotas](managing-user-quotas.md)
+ [Increasing file system storage capacity](increase-storage-capacity.md)
+ [Monitoring storage capacity increases](monitoring-storage-capacity-increase.md)
+ [Increasing the storage capacity of an FSx for Windows File Server file system dynamically](automate-storage-capacity-increase.md)
+ [Updating the storage type of an FSx for Windows file system](updating-storage-type.md)
+ [Monitoring storage type updates](monitoring-storage-type-updates.md)
+ [Updating a file system's SSD IOPS](how-to-provision-ssd-iops.md)
+ [Monitoring provisioned SSD IOPS updates](monitoring-provisioned-ssd-iops.md)
+ [Managing data deduplication](managing-data-dedup.md)
+ [Troubleshooting data deduplication](data-dedup-ts.md)

## Optimizing storage costs


You can optimize your storage costs using the storage configuration options available in FSx for Windows.

**Storage type options**—FSx for Windows File Server provides two storage types, hard disk drives (HDD) and solid state drives (SSD), to enable you to optimize cost and performance to meet your workload needs. HDD storage is designed for a broad spectrum of workloads, including home directories, user and departmental shares, and content management systems. SSD storage is designed for the highest-performance and most latency-sensitive workloads, including databases, media processing workloads, and data analytics applications. For more information about storage types and file system performance, see [FSx for Windows File Server performance](performance.md).

**Data deduplication**—Large datasets often have redundant data, which increases data storage costs. For example, user file shares can have multiple copies of the same file, stored by multiple users. Software development shares can contain many binaries that remain unchanged from build to build. You can reduce your data storage costs by turning on *data deduplication* for your file system. When it's turned on, data deduplication automatically reduces or eliminates redundant data by storing duplicated portions of the dataset only once. For more information about data deduplication, and how to easily turn it on for your Amazon FSx file system, see [Reducing storage costs with Data Deduplication](#using-data-dedup).

## Managing storage capacity


You can increase your FSx for Windows file system's storage capacity as your storage requirements change. You can do so using the Amazon FSx console, the Amazon FSx API, or the AWS Command Line Interface (AWS CLI). Factors to consider when planning a storage capacity increase include knowing when you need to increase storage capacity, understanding how Amazon FSx processes storage capacity increases, and tracking the progress of a storage increase request. You can only increase a file system's storage capacity; you cannot decrease storage capacity. 

**Note**  
You can't increase storage capacity for file systems created before June 23, 2019 or file systems restored from a backup belonging to a file system that was created before June 23, 2019.

When you increase the storage capacity of your Amazon FSx file system, Amazon FSx adds a new, larger set of disks to your file system behind the scenes. Amazon FSx then runs a storage optimization process in the background to transparently migrate data from the old disks to the new disks. Storage optimization can take between a few hours and several days, depending on the storage type and other factors, with minimal noticeable impact on the workload performance. During this optimization, backup usage is temporarily higher, because both the old and new storage volumes are included in the file system-level backups. Both sets of storage volumes are included to ensure that Amazon FSx can successfully take and restore from backups even during storage scaling activity. The backup usage reverts to its previous baseline level after the old storage volumes are no longer included in the backup history. When the new storage capacity becomes available, you are billed only for the new storage capacity.

The following illustration shows the four main steps of the process that Amazon FSx uses when increasing a file system's storage capacity.

![\[Diagram showing the 4 steps of the storage scaling process.\]](http://docs.aws.amazon.com/fsx/latest/WindowsGuide/images/storage-scaling-flow.png)


You can track the progress of storage optimization, SSD storage capacity increases, or SSD IOPS updates at any time using the Amazon FSx console, CLI, or API. For more information, see [Monitoring storage capacity increases](monitoring-storage-capacity-increase.md).

### What to know about increasing a file system's storage capacity

 Here are a few important items to consider when increasing storage capacity: 
+ **Increase only** – You can only *increase* the amount of storage capacity for a file system; you can't decrease storage capacity.
+ **Minimum increase** – Each storage capacity increase must be a minimum of 10 percent of the file system's current storage capacity, up to the maximum allowed value of 65,536 GiB.
+ **Minimum throughput capacity** – To increase storage capacity, a file system must have a minimum throughput capacity of 16 MBps. This is because the storage optimization step is a throughput-intensive process.
+ **Time between increases** – You can't make further storage capacity increases on a file system until 6 hours after the last increase was requested, or until the storage optimization process has completed, whichever time is longer. Storage optimization can take from a few hours up to a few days to complete. To minimize the time it takes for storage optimization to complete, we recommend increasing your file system's throughput capacity before increasing storage capacity (the throughput capacity can be scaled back down after storage scaling completes), and increasing storage capacity when there is minimal traffic on the file system.

**Note**  
Certain file system events can consume disk I/O performance resources. For example:  
The optimization phase of storage capacity scaling can generate increased disk throughput, and potentially cause performance warnings. For more information, see [Performance warnings and recommendations](monitoring-cloudwatch.md#performance-insights-FSxW).
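The constraints above can be sketched as a small validation helper. This is an illustrative sketch, not part of any FSx SDK; the function name and return format are assumptions, while the limits (10 percent minimum increase, 65,536 GiB maximum, 16 MBps minimum throughput) come from the rules listed above.

```python
import math

MAX_STORAGE_GIB = 65_536        # maximum storage capacity for FSx for Windows File Server
MIN_INCREASE_FACTOR = 1.10      # each increase must be at least 10 percent
MIN_THROUGHPUT_MBPS = 16        # minimum throughput capacity required for scaling

def validate_storage_increase(current_gib: int, target_gib: int,
                              throughput_mbps: int) -> list[str]:
    """Return the reasons a storage increase request would be rejected (empty if valid)."""
    problems = []
    if throughput_mbps < MIN_THROUGHPUT_MBPS:
        problems.append(f"throughput capacity must be at least {MIN_THROUGHPUT_MBPS} MBps")
    if target_gib <= current_gib:
        problems.append("storage capacity can only be increased, never decreased")
    elif target_gib < math.ceil(current_gib * MIN_INCREASE_FACTOR):
        problems.append("increase must be at least 10 percent of current capacity")
    if target_gib > MAX_STORAGE_GIB:
        problems.append(f"storage capacity cannot exceed {MAX_STORAGE_GIB} GiB")
    return problems
```

For example, growing a 1,000 GiB file system to 1,050 GiB fails the 10 percent rule, while 1,100 GiB passes.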

### Knowing when to increase storage capacity


Increase your file system's storage capacity when it's running low on free storage capacity. Use the `FreeStorageCapacity` CloudWatch metric to monitor the amount of free storage available on the file system. You can create an Amazon CloudWatch alarm on this metric and get notified when it drops below a specific threshold. For more information, see [Monitoring with Amazon CloudWatch](monitoring-cloudwatch.md).

We recommend maintaining at least 20% of free storage capacity at all times on your file system. Using all of your storage capacity can negatively impact your performance and might introduce data inconsistencies. 

You can automatically increase your file system's storage capacity when the amount of free storage capacity falls below a threshold that you specify. Use the AWS-developed custom CloudFormation template to deploy all of the components required to implement the automated solution. For more information, see [Increasing storage capacity dynamically](automate-storage-capacity-increase.md).
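The 20 percent free-capacity guidance above reduces to a one-line check, shown here as a hypothetical sketch (the function name is illustrative; in practice the inputs would come from the `FreeStorageCapacity` CloudWatch metric and the file system's provisioned capacity):

```python
def should_increase_storage(storage_capacity_gib: float,
                            free_storage_gib: float,
                            free_threshold: float = 0.20) -> bool:
    """Return True when free capacity has dropped below the threshold fraction (default 20%)."""
    return free_storage_gib / storage_capacity_gib < free_threshold
```

With 300 GiB provisioned and only 45 GiB (15 percent) free, an increase is due; at 90 GiB (30 percent) free, it is not.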

### Storage capacity increases and file system performance

Most workloads experience minimal performance impact while Amazon FSx runs the storage optimization process in the background after the new storage capacity is available. However, file systems with the HDD storage type and workloads involving large numbers of end users, high levels of I/O, or datasets with large numbers of small files could temporarily experience a reduction in performance. For these cases, we recommend that you first increase your file system's throughput capacity before increasing storage capacity, and that you make the change during idle periods when there is minimal load on your file system. This enables you to continue providing the same level of throughput to meet your application's performance needs. For more information, see [Managing throughput capacity](managing-throughput-capacity.md).

## Managing your file system's storage type

You can change your file system storage type from HDD to SSD using the AWS Management Console and AWS CLI. When you change the storage type to SSD, keep in mind that you can't update your file system configuration again until 6 hours after the last update was requested, or until the storage optimization process is complete—whichever time is longer. Storage optimization can take between a few hours and a few days to complete. To minimize this time, we recommend updating your storage type when there is minimal traffic on your file system. For more information, see [Updating the storage type of an FSx for Windows file system](updating-storage-type.md).

You can't change your file system storage type from SSD to HDD. If you want to change a file system's storage type from SSD to HDD, you will need to restore a backup of the file system to a new file system that you configure to use HDD storage. For more information, see [Restoring backups to new file system](using-backups.md#restoring-backups).

### About storage types


You can configure your FSx for Windows File Server file system to use either the solid state drive (SSD) or the magnetic hard disk drive (HDD) storage type.

**SSD storage** is appropriate for most production workloads that have high performance requirements and latency-sensitivity. Examples of these workloads include databases, data analytics, media processing, and business applications. We also recommend SSD for use cases involving large numbers of end users, high levels of I/O, or datasets that have large numbers of small files. Lastly, we recommend using SSD storage if you plan to enable shadow copies. You can configure and scale SSD IOPS for file systems with SSD storage, but not HDD storage.

**HDD storage** is designed for a broad range of workloads—including home directories, user and departmental file shares, and content management systems. HDD storage comes at a lower cost relative to SSD storage, but with higher latencies and lower levels of disk throughput and disk IOPS per unit of storage. It might be suitable for general-purpose user shares and home directories with low I/O requirements, large content management systems (CMS) where data is retrieved infrequently, or datasets with small numbers of large files.

For more information, see [Storage configuration & performance](performance.md#storage-capacity-and-performance). 

## Managing SSD IOPS


For file systems configured with SSD storage, the amount of SSD IOPS determines the amount of disk I/O available when your file system reads data from and writes data to disk, as opposed to data that is in cache. You can select and scale the amount of SSD IOPS independently of storage capacity. The maximum SSD IOPS that you can provision depends on the amount of storage capacity and throughput capacity you select for your file system. If you attempt to increase your SSD IOPS above the limit that's supported by your throughput capacity, you might need to increase your throughput capacity to get that level of SSD IOPS. For more information, see [FSx for Windows File Server performance](performance.md) and [Managing throughput capacity](managing-throughput-capacity.md).

 Here are a few important items to know about updating a file system's provisioned SSD IOPS:
+ **Choosing an IOPS mode** – There are two IOPS modes to choose from:
  + **Automatic** – Choose this mode and Amazon FSx automatically scales your SSD IOPS to maintain 3 SSD IOPS per GiB of storage capacity, up to 400,000 SSD IOPS per file system.
  + **User-provisioned** – Choose this mode to specify the number of SSD IOPS within the range of 96–400,000. Specify between 3 and 50 IOPS per GiB of storage capacity in all AWS Regions where Amazon FSx is available, or between 3 and 500 IOPS per GiB in US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Singapore). If the amount of SSD IOPS you specify is not at least 3 IOPS per GiB, the request fails. For higher levels of provisioned SSD IOPS, you pay for the average IOPS above 3 IOPS per GiB per file system.
+ **Storage capacity updates** – If you increase your file system's storage capacity, and the new capacity requires a default amount of SSD IOPS (3 IOPS per GiB) that is greater than your current user-provisioned SSD IOPS level, Amazon FSx automatically switches your file system to Automatic mode, and your file system has a minimum of 3 SSD IOPS per GiB of storage capacity.
+ **Throughput capacity updates** – If you increase your throughput capacity, and the maximum SSD IOPS supported by your new throughput capacity is higher than your user-provisioned SSD IOPS level, Amazon FSx automatically switches your file system to Automatic mode.
+ **Frequency of SSD IOPS increases** – You can't make further SSD IOPS increases, throughput capacity increases, or storage type updates on a file system until 6 hours after the last increase was requested, or until the storage optimization process has completed—whichever time is longer. Storage optimization can take from a few hours up to a few days to complete. To minimize the time it takes for storage optimization to complete, we recommend scaling SSD IOPS when there is minimal traffic on the file system.

**Note**  
Throughput capacity levels of 4,608 MBps and higher are supported only in the following AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Singapore).

For more information about how to update the amount of provisioned SSD IOPS for your FSx for Windows File Server file system, see [Updating a file system's SSD IOPS](how-to-provision-ssd-iops.md).
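The IOPS-mode rules above can be summarized in a short sketch. This is illustrative only (the function names are not part of the FSx API), and it assumes the standard AWS Region codes for the Regions listed above:

```python
# Regions where up to 500 IOPS per GiB can be user-provisioned (assumed codes
# for the Regions named above: N. Virginia, Oregon, Ohio, Ireland, Tokyo, Singapore)
HIGH_IOPS_REGIONS = {"us-east-1", "us-west-2", "us-east-2",
                     "eu-west-1", "ap-northeast-1", "ap-southeast-1"}

def automatic_iops(storage_gib: int) -> int:
    """Automatic mode: 3 SSD IOPS per GiB, capped at 400,000 per file system."""
    return min(3 * storage_gib, 400_000)

def validate_user_iops(storage_gib: int, iops: int, region: str) -> bool:
    """User-provisioned mode: 96-400,000 IOPS and 3-50 (or 3-500) IOPS per GiB."""
    per_gib_max = 500 if region in HIGH_IOPS_REGIONS else 50
    return (96 <= iops <= 400_000
            and 3 * storage_gib <= iops <= per_gib_max * storage_gib)
```

For a 2,048 GiB file system, Automatic mode maintains 6,144 SSD IOPS; a user-provisioned request below 3 IOPS per GiB is rejected.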

## Reducing storage costs with Data Deduplication

Data Deduplication, often referred to as dedup for short, helps storage administrators reduce costs associated with duplicated data. With FSx for Windows File Server, you can use Microsoft Data Deduplication to identify and eliminate redundant data. Large datasets often have redundant data, which increases data storage costs. For example:
+ User file shares may have many copies of the same or similar files.
+ Software development shares can have many binaries that remain unchanged from build to build.

You can reduce your data storage costs by enabling data deduplication for your file system. *Data deduplication* reduces or eliminates redundant data by storing duplicated portions of the dataset only once. When you enable data deduplication, data compression is enabled by default, compressing the data after deduplication for additional savings. Data deduplication optimizes redundancies without compromising data fidelity or integrity. It runs as a background process that continually and automatically scans and optimizes your file system, and it is transparent to your users and connected clients.

The storage savings that you can achieve with data deduplication depend on the nature of your dataset, including how much duplication exists across files. Typical savings average 50–60 percent for general-purpose file shares. Within shares, savings range from 30–50 percent for user documents to 70–80 percent for software development datasets. You can measure potential deduplication savings using the `Measure-FSxDedupFileMetadata` remote PowerShell command.
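As a rough back-of-the-envelope sketch, the typical savings rates above translate into physical storage like this (the midpoint figures and function name are illustrative assumptions, not measured values for your dataset):

```python
# Midpoints of the typical savings ranges quoted above (illustrative only)
TYPICAL_SAVINGS = {
    "general_purpose_share": 0.55,   # 50-60 percent
    "user_documents": 0.40,          # 30-50 percent
    "software_dev_datasets": 0.75,   # 70-80 percent
}

def stored_after_dedup(logical_gib: float, dataset_type: str) -> float:
    """Estimate physical GiB consumed after deduplication for a dataset type."""
    return logical_gib * (1.0 - TYPICAL_SAVINGS[dataset_type])
```

For example, 1,000 GiB of software development data might occupy roughly 250 GiB after deduplication; measure your actual dataset with `Measure-FSxDedupFileMetadata` rather than relying on these averages.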

You can also customize data deduplication to meet your specific storage needs. For example, you can configure deduplication to run only on certain file types, or you can create a custom job schedule. Because deduplication jobs can consume file server resources, we recommend monitoring the status of your deduplication jobs using the `Get-FSxDedupStatus` command.

For information about configuring data deduplication on your file system, see [Managing data deduplication](managing-data-dedup.md).

For information on resolving issues related to data deduplication, see [Troubleshooting data deduplication](data-dedup-ts.md).

For more information about data deduplication, see the Microsoft [Understanding Data Deduplication](https://docs.microsoft.com/en-us/windows-server/storage/data-deduplication/understand) documentation.

**Warning**  
It is not recommended to run certain Robocopy commands with data deduplication because these commands can impact the data integrity of the Chunk Store. For more information, see the Microsoft [Data Deduplication interoperability](https://docs.microsoft.com/en-us/windows-server/storage/data-deduplication/interop) documentation.

### Best practices when using data deduplication

Here are some best practices for using Data Deduplication: 
+  **Schedule Data Deduplication jobs to run when your file system is idle**: The default schedule includes a weekly `GarbageCollection` job at 2:45 UTC on Saturdays. It can take multiple hours to complete if you have a large amount of data churn on your file system. If this time isn't ideal for your workload, schedule this job to run at a time when you expect low traffic on your file system. 
+  **Configure sufficient throughput capacity for Data Deduplication to complete**: Higher throughput capacities provide higher levels of memory. Microsoft recommends having 1 GB of memory per 1 TB of logical data to run Data Deduplication. Use the [Amazon FSx performance table](performance.md#impact-throughput-cap-performance) to determine the memory that's associated with your file system's throughput capacity and ensure that the memory resources are sufficient for the size of your data. 
+  **Customize Data Deduplication settings to meet your specific storage needs and reduce performance requirements**: You can constrain the optimization to run on specific file types or folders, or set a minimum file size and age for optimization. To learn more, see [Reducing storage costs with Data Deduplication](#using-data-dedup). 
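The memory sizing guidance above can be checked with a quick sketch. This is an assumption-laden illustration (the function name is hypothetical); it combines Microsoft's 1 GB-per-TB recommendation with the Windows-recommended default of allocating 25 percent of file system memory to deduplication jobs:

```python
def dedup_memory_is_sufficient(memory_gib: float, logical_data_tib: float,
                               allocation: float = 0.25) -> bool:
    """Check whether the memory available to dedup jobs (default 25% of file
    system memory) covers the recommended 1 GiB per TiB of logical data."""
    return memory_gib * allocation >= logical_data_tib * 1.0
```

For example, a file system with 32 GB of memory leaves 8 GB for deduplication, which covers up to about 8 TB of logical data; beyond that, consider increasing throughput capacity to get more memory.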

# Managing storage quotas


You can configure user storage quotas on your file systems to limit how much data storage users can consume. After you set quotas, you can track quota status to monitor usage and see when users surpass their quotas. 

You can also enforce quotas by stopping users who reach their quotas from writing to the storage space. When you enforce quotas, a user that exceeds their quota receives an "insufficient disk space" error message.

You can set these thresholds for quota settings:
+ **Warning** – Used to track whether a user or group is approaching their quota limit; it's used for tracking only and isn't enforced.
+ **Limit** – The storage quota limit for a user or group. 

You can configure default quotas that are applied to new users who access a file system and quotas that apply to specific users or groups. You can also view a report of how much storage each user or group is consuming and whether they're surpassing their quotas. 

Storage consumption at a user level is tracked based on file ownership. Storage consumption is calculated using logical file size, not the actual physical storage space that files occupy. User storage quotas are tracked at the time when data is written to a file.

Updating quotas for multiple users requires either running the update command once for each user, or organizing the users into a group and updating the quota for that group.

You can manage user storage quotas on your file system using the Amazon FSx CLI for remote management on PowerShell. To learn how to use this CLI, see [Using the Amazon FSx CLI for PowerShell](administering-file-systems.md#remote-pwrshell). 

Following are commands that you can use to manage user storage quotas.


| User storage quotas command | Description | 
| --- | --- | 
|  **Enable-FSxUserQuotas**  |  Starts tracking or enforcing user storage quotas, or both.  | 
|  **Disable-FSxUserQuotas**  |  Stops tracking and enforcement for user storage quotas.   | 
| **Get-FSxUserQuotaSettings** | Retrieves the current user-storage quota settings for the file system. | 
| **Get-FSxUserQuotaEntries** | Retrieves the current user-storage quota entries for individual users and groups on the file system. | 
| **Set-FSxUserQuotas** | Sets the user storage quota for an individual user or group. Quota values are specified in bytes. | 

The online help for each command provides a reference of all command options. To access this help, run the command with **-?**, for example **Enable-FSxUserQuotas -?**. 
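Because `Set-FSxUserQuotas` takes quota values in bytes, it can help to convert from GiB first. A minimal sketch (the helper name is hypothetical, not part of any FSx tooling):

```python
def gib_to_bytes(gib: float) -> int:
    """Convert a quota size in GiB to the byte value Set-FSxUserQuotas expects."""
    return int(gib * 1024 ** 3)

# For example, a 10 GiB limit with a warning threshold at 8 GiB:
limit_bytes = gib_to_bytes(10)      # 10737418240
warning_bytes = gib_to_bytes(8)     # 8589934592
```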

# Increasing file system storage capacity

You can increase your FSx for Windows File Server file system's storage capacity as your storage requirements change. Use the Amazon FSx console, the AWS CLI, or the Amazon FSx API to increase a file system's storage capacity as described in the following procedures. For more information, see [Managing storage capacity](managing-storage-configuration.md#managing-storage-capacity).

## To increase storage capacity for a file system (console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems** and choose the Windows file system that you want to increase storage capacity for.

1. For **Actions**, choose **Update storage**. Or, in the **Summary** panel, choose **Update** next to the file system's **Storage capacity**. 

   The **Update storage capacity** window appears.

1. For **Input type**, choose **Percentage** to enter the new storage capacity as a percentage change from the current value, or choose **Absolute** to enter the new value in GiB.

1. Enter the **Desired storage capacity**.
**Note**  
The desired capacity value must be at least 10 percent larger than the current value, up to the maximum value of 65,536 GiB.

1. Choose **Update** to initiate the storage capacity update.

1. You can monitor the update progress on the **File systems** detail page, in the **Updates** tab.

## To increase storage capacity for a file system (CLI)


To increase the storage capacity for an FSx for Windows File Server file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html). Set the following parameters:
+ `--file-system-id` to the ID of the file system you are updating.
+ `--storage-capacity` to a value that is at least 10 percent greater than the current value.

You can monitor the progress of the update by using the AWS CLI command [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html). Look for the `administrative-actions` in the output. 

For more information, see [AdministrativeAction](https://docs.aws.amazon.com/fsx/latest/APIReference/API_AdministrativeAction.html).
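The CLI steps above can also be scripted. The following sketch builds the parameters for an update request, applying the 10 percent minimum; the function name is illustrative, though `FileSystemId` and `StorageCapacity` are the real parameter names for the FSx `UpdateFileSystem` operation (the boto3 call is shown commented, since running it requires AWS credentials):

```python
import math

def build_storage_update(file_system_id: str, current_gib: int,
                         percent_increase: float) -> dict:
    """Build keyword arguments for an update-file-system call that grows
    storage capacity by the given percentage (minimum 10 percent)."""
    if percent_increase < 10:
        raise ValueError("increase must be at least 10 percent")
    target = math.ceil(current_gib * (1 + percent_increase / 100))
    return {"FileSystemId": file_system_id, "StorageCapacity": target}

# request = build_storage_update("fs-0123456789abcdef0", 300, 20)
# boto3.client("fsx").update_file_system(**request)   # not run here
```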

# Monitoring storage capacity increases

After increasing your file system's storage capacity, you can monitor the progress of the storage capacity increase using the Amazon FSx console, the API, or the AWS CLI as described in the following procedures.

## Monitoring increases in the console


In the **Updates** tab in the **File system details** window, you can view the 10 most recent updates for each update type.

For storage capacity updates, you can view the following information.

**Update type**  
The possible value is **Storage capacity**.

**Target value**  
The desired value to update the file system's storage capacity to.

**Status**  
The current status of the update. For storage capacity updates, the possible values are as follows:  
+ **Pending** – Amazon FSx has received the update request, but has not started processing it.
+ **In progress** – Amazon FSx is processing the update request.
+ **Updated optimizing** – Amazon FSx has increased the file system's storage capacity. The storage optimization process is now moving the file system data to the new, larger disks.
+ **Completed** – The storage capacity increase completed successfully.
+ **Failed** – The storage capacity increase failed. Choose the question mark (**?**) to see details on why the storage update failed.

**Progress %**  
Displays the progress of the storage optimization process as percent complete.

**Request time**  
The time that Amazon FSx received the update action request.

## Monitoring increases with the AWS CLI and API


You can view and monitor file system storage capacity increase requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API action. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you increase a file system's storage capacity, two `AdministrativeActions` are generated: a `FILE_SYSTEM_UPDATE` and a `STORAGE_OPTIMIZATION` action. 

The following example shows an excerpt of the response of a **describe-file-systems** CLI command. The file system has a storage capacity of 300 GB, and there is a pending administrative action to increase the storage capacity to 1000 GB.

```
{
    "FileSystems": [
        {
            "OwnerId": "111122223333",
            .
            .
            .
            "StorageCapacity": 300,
            "AdministrativeActions": [
                {
                    "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                    "RequestTime": 1581694764.757,
                    "Status": "PENDING",
                    "TargetFileSystemValues": {
                        "StorageCapacity": 1000
                    }
                },
                {
                    "AdministrativeActionType": "STORAGE_OPTIMIZATION",
                    "RequestTime": 1581694764.757,
                    "Status": "PENDING"
                }
            ]
```

Amazon FSx processes the `FILE_SYSTEM_UPDATE` action first, adding the new larger storage disks to the file system. When the new storage is available to the file system, the `FILE_SYSTEM_UPDATE` status changes to `UPDATED_OPTIMIZING`. The storage capacity shows the new larger value, and Amazon FSx begins processing the `STORAGE_OPTIMIZATION` administrative action. This is shown in the following excerpt of the response of a **describe-file-systems** CLI command. 

The `ProgressPercent` property displays the progress of the storage optimization process. After the storage optimization process completes successfully, the status of the `FILE_SYSTEM_UPDATE` action changes to `COMPLETED`, and the `STORAGE_OPTIMIZATION` action no longer appears.

```
{
    "FileSystems": [
        {
            "OwnerId": "111122223333",
            .
            .
            .
            "StorageCapacity": 1000,
            "AdministrativeActions": [
                {
                    "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                    "RequestTime": 1581694764.757,
                    "Status": "UPDATED_OPTIMIZING",
                    "TargetFileSystemValues": {
                        "StorageCapacity": 1000
                    }
                },
                {
                    "AdministrativeActionType": "STORAGE_OPTIMIZATION",
                    "RequestTime": 1581694764.757,
                    "Status": "IN_PROGRESS",
                    "ProgressPercent": 50
                }
            ]
```
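The status fields in these responses can also be polled programmatically. The following Python sketch extracts the storage-related administrative actions from a `describe-file-systems` response that has already been parsed into a dictionary (for example, JSON output captured from the AWS CLI); the sample response below is modeled on the excerpts above and is illustrative only.

```python
import json

def summarize_storage_actions(response):
    """Return (action_type, status, progress) tuples for storage-related
    administrative actions in a DescribeFileSystems response."""
    summaries = []
    for fs in response.get("FileSystems", []):
        for action in fs.get("AdministrativeActions", []):
            if action["AdministrativeActionType"] in ("FILE_SYSTEM_UPDATE", "STORAGE_OPTIMIZATION"):
                summaries.append((
                    action["AdministrativeActionType"],
                    action["Status"],
                    action.get("ProgressPercent"),  # present only while optimization runs
                ))
    return summaries

# Sample response modeled on the excerpt above
sample = json.loads("""
{
    "FileSystems": [
        {
            "StorageCapacity": 1000,
            "AdministrativeActions": [
                {"AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                 "Status": "UPDATED_OPTIMIZING"},
                {"AdministrativeActionType": "STORAGE_OPTIMIZATION",
                 "Status": "IN_PROGRESS",
                 "ProgressPercent": 50}
            ]
        }
    ]
}
""")

print(summarize_storage_actions(sample))
# [('FILE_SYSTEM_UPDATE', 'UPDATED_OPTIMIZING', None), ('STORAGE_OPTIMIZATION', 'IN_PROGRESS', 50)]
```

A real monitoring script would run `describe-file-systems` on an interval and stop when the `STORAGE_OPTIMIZATION` action no longer appears.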



If the storage capacity increase fails, the status of the `FILE_SYSTEM_UPDATE` action changes to `FAILED`. The `FailureDetails` property provides information about the failure, as shown in the following example.

```
{
    "FileSystems": [ 
        { 
            "OwnerId": "111122223333",
            .
            .
            .
            "StorageCapacity": 300,
            "AdministrativeActions": [ 
                {
                    "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                    "FailureDetails": {
                        "Message": "string"
                    },
                    "RequestTime": 1581694764.757,
                    "Status": "FAILED",
                    "TargetFileSystemValues": {
                        "StorageCapacity": 1000
                    }
                }
            ]
```

For information about troubleshooting failed actions, see [Storage or throughput capacity updates fail](admin-actions-ts.md).

# Increasing the storage capacity of an FSx for Windows File Server file system dynamically
Increasing storage capacity dynamically

As an alternative to manually increasing your FSx for Windows File Server file system's storage capacity as the amount of stored data grows, you can use a CloudFormation template to increase storage automatically. The solution presented in this section dynamically increases a file system's storage capacity when the amount of free storage capacity falls below a threshold that you specify.

This AWS CloudFormation template automatically deploys all of the components that are required to define the free storage capacity threshold, the Amazon CloudWatch alarm based on this threshold, and the AWS Lambda function that increases the file system’s storage capacity.

The solution takes in the following parameters:
+ The file system ID
+ The free storage capacity threshold (numerical value)
+ Unit of measurement (percentage [default] or GiB)
+ The percentage by which to increase the storage capacity (%)
+ The email address for the SNS subscription
+ Adjust alarm threshold (Yes/No)

**Topics**
+ [

## Architecture overview
](#storage-inc-architecture)
+ [

## CloudFormation template
](#storage-capacity-CFN-template)
+ [

## Automated deployment with CloudFormation
](#fsx-dynamic-storage-increase-deployment)

## Architecture overview


Deploying this solution builds the following resources in the AWS Cloud.

![\[Architecture diagram of the solution to automatically increase the storage capacity of an FSx for Windows File Server file system.\]](http://docs.aws.amazon.com/fsx/latest/WindowsGuide/images/auto-storage-increase-architecture.png)


The diagram illustrates the following steps:

1. The CloudFormation template deploys a CloudWatch alarm, an AWS Lambda function, an Amazon Simple Notification Service (Amazon SNS) topic, and the required AWS Identity and Access Management (IAM) role. The IAM role gives the Lambda function permission to invoke the Amazon FSx API operations.

1. CloudWatch triggers an alarm when the file system’s free storage capacity goes below the specified threshold, and sends a message to the Amazon SNS topic.

1. The solution then triggers the Lambda function that is subscribed to this Amazon SNS topic.

1. The Lambda function calculates the new file system storage capacity based on the specified percent increase value and sets the new file system storage capacity.

1. The Lambda function can optionally adjust the free storage capacity threshold so that it is equal to a specified percentage of the file system’s new storage capacity.

1. The original CloudWatch alarm state and the results of the Lambda function operations are published to the Amazon SNS topic.

To receive notifications about the actions that are performed as a response to the CloudWatch alarm, you must confirm the Amazon SNS topic subscription by following the link provided in the **Subscription Confirmation** email.
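The capacity calculation in steps 4 and 5 can be sketched as follows. This is a simplified illustration, not the actual Lambda code from the template; the helper names are hypothetical, and the sketch also enforces the Amazon FSx requirement that each storage capacity increase be at least 10 percent above the current value.

```python
import math

MIN_INCREASE_PCT = 10  # FSx requires each increase to be at least 10% of current capacity

def new_storage_capacity(current_gib, percent_increase):
    """Step 4: compute the target capacity for a storage increase request."""
    pct = max(percent_increase, MIN_INCREASE_PCT)
    # Integer-friendly form avoids float rounding surprises
    return math.ceil(current_gib * (100 + pct) / 100)

def new_threshold_gib(new_capacity_gib, threshold_pct):
    """Step 5 (optional): keep the free-space alarm threshold at a fixed
    percentage of the new capacity."""
    return math.ceil(new_capacity_gib * threshold_pct / 100)

print(new_storage_capacity(300, 20))  # 360
print(new_threshold_gib(360, 10))     # 36
```

For example, a 300 GiB file system with `PercentIncrease` set to 20 is resized to 360 GiB, and a 10 percent free-space threshold is re-anchored at 36 GiB.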

## CloudFormation template


This solution uses CloudFormation to automate deploying the components that are used to automatically increase the storage capacity of an FSx for Windows File Server file system. To use this solution, download the [IncreaseFSxSize](https://s3.amazonaws.com/solution-references/fsx/DynamicScaling/IncreaseFSxSize.yaml) CloudFormation template.

The template uses the **Parameters** described as follows. Review the template parameters and their default values, and modify them for the needs of your file system.



**FileSystemId**  
No default value. The ID of the file system for which you want to automatically increase the storage capacity.

**LowFreeDataStorageCapacityThreshold**  
No default value. Specifies the initial free storage capacity threshold at which to trigger an alarm and automatically increase the file system's storage capacity, specified in GiB or as a percentage (%) of the file system's current storage capacity. When expressed as a percentage, the CloudFormation template recalculates the value in GiB to match the CloudWatch alarm settings.

**LowFreeDataStorageCapacityThresholdUnit**  
Default is **%**. Specifies the units for the `LowFreeDataStorageCapacityThreshold`, either in GiB or as a percentage of the current storage capacity. 

**AlarmModificationNotification**  
Default is **Yes**. If set to **Yes**, the initial `LowFreeDataStorageCapacityThreshold` is increased proportionally to the value of `PercentIncrease` for subsequent alarm thresholds.  
For example, when `PercentIncrease` is set to 20 and `AlarmModificationNotification` is set to **Yes**, the available free space threshold (`LowFreeDataStorageCapacityThreshold`) specified in GiB is increased by 20% for subsequent storage capacity increase events.

**EmailAddress**  
No default value. Specifies the email address for the SNS subscription, which receives storage capacity threshold alerts.

**PercentIncrease**  
No default value. Specifies the amount by which to increase the storage capacity, expressed as a percentage of the current storage capacity.

## Automated deployment with CloudFormation


The following procedure configures and deploys a CloudFormation stack to automatically increase the storage capacity of an FSx for Windows File Server file system. It takes about 5 minutes to deploy. 

**Note**  
Implementing this solution incurs billing for the associated AWS services. For more information, see the pricing details pages for those services.

Before you start, you must have the ID of the Amazon FSx file system running in an Amazon Virtual Private Cloud (Amazon VPC) in your AWS account. For more information about creating Amazon FSx resources, see [Getting started with Amazon FSx for Windows File Server](getting-started.md).

**To launch the automatic storage capacity increase solution stack**

1. Download the [IncreaseFSxSize](https://s3.amazonaws.com/solution-references/fsx/DynamicScaling/IncreaseFSxSize.yaml) CloudFormation template. For more information about creating a CloudFormation stack, see [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the *AWS CloudFormation User Guide*.
**Note**  
Amazon FSx is currently only available in specific AWS Regions. You must launch this solution in an AWS Region where Amazon FSx is available. For more information, see [Amazon FSx endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/fsxn.html) in the *AWS General Reference*.

1. In **Specify stack details**, enter the values for your automatic storage capacity increase solution.  
![\[Screenshot showing the values entered for the Specify stack details page for the CloudFormation template.\]](http://docs.aws.amazon.com/fsx/latest/WindowsGuide/images/dynamic-storage-capacity-increase-cfn-stack.png)

1. Enter a **Stack name**.

1. For **Parameters**, review the parameters for the template and modify them for the needs of your file system. Then choose **Next**.

1. Enter any **Options** settings that you want for your custom solution, and then choose **Next**.

1. For **Review**, review and confirm the solution settings. You must select the check box acknowledging that the template creates IAM resources.

1. Choose **Create** to deploy the stack.

You can view the status of the stack in the CloudFormation console in the **Status** column. You should see a status of **CREATE\_COMPLETE** in about 5 minutes.

### Updating the stack


After the stack is created, you can update it by using the same template and providing new values for the parameters. For more information, see [Updating stacks directly](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-direct.html) in the *AWS CloudFormation User Guide*.

# Updating the storage type of an FSx for Windows file system
Updating storage type

You can change the storage type of a file system that uses HDD storage to use SSD storage. You can use the Amazon FSx console, the AWS CLI, or the Amazon FSx API to change a file system's storage type, as shown in the following procedures. For more information, see [Managing your file system's storage type](managing-storage-configuration.md#managing-storage-type).

## To update a file system's storage type (console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems** and choose the Windows file system that you want to update the storage type for.

1. Under **Actions**, choose **Update storage type**. Or, in the **Summary** panel, select the **Update** button next to **HDD**. The **Update storage type** window appears.

1. For **Desired storage type**, choose **SSD**. Choose **Update** to initiate the storage type update.

   You can [monitor the progress](monitoring-storage-type-updates.md) of the storage type update using the console and the CLI.

## To update a file system's storage type (CLI)


To update the storage type for an FSx for Windows File Server file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html). Set the following parameters:
+ `--file-system-id` to the ID of the file system that you want to update.
+ `--storage-type` to `SSD`. You can't switch from the SSD storage type to the HDD storage type.

You can monitor the progress of the update by using the AWS CLI command [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html). Look for the `AdministrativeActions` array in the output. 

For more information, see [AdministrativeAction](https://docs.aws.amazon.com/fsx/latest/APIReference/API_AdministrativeAction.html).

# Monitoring storage type updates


After you update your file system's storage type from HDD to SSD storage, you can monitor the progress of the storage type update using the Amazon FSx console, the AWS CLI, or the API, as described in the following procedures.

## Monitoring file system updates in the console


On the **Updates** tab in the **File system details** window, you can view the 10 most recent updates for each update type.

For storage type updates, you can view the following information.

**Update type**  
Possible value is **Storage type**.

**Target value**  
**SSD**

**Status**  
The current status of the update. For storage type updates, the possible values are as follows:  
+ **Pending** – Amazon FSx has received the update request, but has not started processing it.
+ **In progress** – Amazon FSx is processing the update request.
+ **Updated optimizing** – The SSD storage performance is available for write operations. The update enters an **Updated optimizing** state, which typically lasts a few hours, during which read operations have performance levels between HDD and SSD. After the update action is complete, the new SSD performance is available for both reads and writes.
+ **Completed** – The storage type update completed successfully.
+ **Failed** – The storage type update failed. Choose the question mark (**?**) to see details on why the update failed.

**Progress %**  
Displays the progress of the storage optimization process as percent complete.

**Request time**  
The time that Amazon FSx received the update action request.

## Monitoring updates with the AWS CLI and API


You can view and monitor file system storage type update requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API action. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you update a file system's storage type, two `AdministrativeActions` are generated: a `FILE_SYSTEM_UPDATE` and a `STORAGE_TYPE_OPTIMIZATION` action. 

# Updating a file system's SSD IOPS
Updating the SSD IOPS

For file systems configured with SSD storage, the level of provisioned SSD IOPS determines the amount of disk I/O available when your file system has to read data from and write data to disk, as opposed to reading or writing data that is in cache. You can update SSD IOPS for a file system using the Amazon FSx console, the AWS CLI, or the Amazon FSx API, as described in the following procedures. For more information about managing SSD IOPS, see [Managing SSD IOPS](managing-storage-configuration.md#managing-provisioned-ssd-iops).

## To update SSD IOPS for a file system (console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems** and choose the Windows file system that you want to update SSD IOPS for.

1. Under **Actions**, choose **Update SSD IOPS**. Or, in the **Summary** panel, select the **Update** button next to **Provisioned SSD IOPS**. The **Update IOPS provisioning** window opens.

1. For **Mode**, choose **Automatic** or **User-provisioned**. If you choose **Automatic**, Amazon FSx automatically provisions 3 SSD IOPS per GiB of storage capacity for your file system. If you choose **User-provisioned**, enter any whole number in the range of 96–400,000.

1. Choose **Update** to initiate the provisioned SSD IOPS update.

1. You can monitor the update progress on the **File systems** detail page, on the **Updates** tab.

## To update SSD IOPS for a file system (CLI)


To update SSD IOPS for an FSx for Windows File Server file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) with the `--windows-configuration DiskIopsConfiguration` property. This property has two parameters, `Iops` and `Mode`:
+ To specify the number of SSD IOPS, use `Iops=number_of_IOPS` (up to a maximum of 400,000 in supported AWS Regions) and `Mode=USER_PROVISIONED`.
+ To have Amazon FSx increase your SSD IOPS automatically, use `Mode=AUTOMATIC` and omit the `Iops` parameter. Amazon FSx automatically maintains 3 SSD IOPS per GiB of storage capacity on your file system, up to a maximum of 400,000 in supported AWS Regions.
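The two modes can be summarized in a short sketch. The 3 IOPS per GiB default, the 400,000 IOPS ceiling, and the 96–400,000 user-provisioned range come from the values above; the helper names are illustrative and are not part of any AWS SDK.

```python
MAX_SSD_IOPS = 400_000           # maximum in supported AWS Regions
MIN_USER_PROVISIONED_IOPS = 96   # console minimum for user-provisioned mode

def automatic_iops(storage_capacity_gib):
    """AUTOMATIC mode: FSx maintains 3 SSD IOPS per GiB, up to the maximum."""
    return min(3 * storage_capacity_gib, MAX_SSD_IOPS)

def validate_user_iops(iops):
    """USER_PROVISIONED mode: any whole number in the range 96-400,000."""
    if not (MIN_USER_PROVISIONED_IOPS <= iops <= MAX_SSD_IOPS):
        raise ValueError(
            f"SSD IOPS must be between {MIN_USER_PROVISIONED_IOPS} and {MAX_SSD_IOPS}"
        )
    return iops

print(automatic_iops(2048))     # 6144
print(automatic_iops(200_000))  # capped at 400000
```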

You can monitor the progress of the update by using the AWS CLI command [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html). Look for the `AdministrativeActions` array in the output. 

For more information, see [AdministrativeAction](https://docs.aws.amazon.com/fsx/latest/APIReference/API_AdministrativeAction.html).

# Monitoring provisioned SSD IOPS updates


After you update the amount of provisioned SSD IOPS for your file system, you can monitor the progress of the SSD IOPS update using the Amazon FSx console, the AWS CLI, or the API, as described in the following procedures.

## Monitoring updates in the console


In the **Updates** tab in the **File system details** window, you can view the 10 most recent updates for each update type.

For provisioned SSD IOPS updates, you can view the following information.

**Update type**  
Possible values are **IOPS Mode** and **SSD IOPS**.

**Target value**  
The desired value to update the file system's IOPS mode and SSD IOPS to.

**Status**  
The current status of the update. For SSD IOPS updates, the possible values are as follows:  
+ **Pending** – Amazon FSx has received the update request, but has not started processing it.
+ **In progress** – Amazon FSx is processing the update request.
+ **Updated optimizing** – The new IOPS level is available for your workload's write operations. The update enters an **Updated optimizing** state, which typically lasts a few hours, during which your workload's read operations have IOPS performance between the previous level and the new level. After the update action is complete, the new IOPS level is available for both reads and writes.
+ **Completed** – The SSD IOPS update completed successfully.
+ **Failed** – The SSD IOPS update failed. Choose the question mark (**?**) to see details on why the update failed.

**Progress %**  
Displays the progress of the storage optimization process as percent complete.

**Request time**  
The time that Amazon FSx received the update action request.

## Monitoring updates with the AWS CLI and API


You can view and monitor file system SSD IOPS update requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API action. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you increase a file system's SSD IOPS, two `AdministrativeActions` are generated: a `FILE_SYSTEM_UPDATE` and an `IOPS_OPTIMIZATION` action. 

# Managing data deduplication


You can manage your file system's [data deduplication settings](managing-storage-configuration.md#using-data-dedup) using the Amazon FSx CLI for remote management on PowerShell. For more information about using the Amazon FSx CLI for remote management on PowerShell, see [Using the Amazon FSx CLI for PowerShell](administering-file-systems.md#remote-pwrshell). 

Following are commands that you can use for data deduplication. 


| Data deduplication command | Description | 
| --- | --- | 
| **[Enable-FSxDedup](#enable-dedup)** | Enables data deduplication on the file share. Data compression after deduplication is enabled by default when you enable data deduplication. | 
| **Disable-FSxDedup** | Disables data deduplication on the file share. | 
| **Get-FSxDedupConfiguration** | Retrieves deduplication configuration information, including minimum file size and age for optimization, compression settings, and excluded file types and folders. | 
| **Set-FSxDedupConfiguration** | Changes the deduplication configuration settings, including minimum file size and age for optimization, compression settings, and excluded file types and folders. | 
| **[Get-FSxDedupStatus](#get-dedup-status)** | Retrieves the deduplication status, including read-only properties that describe optimization savings and status on the file system, and the times and completion status of the last deduplication jobs on the file system. | 
| **Get-FSxDedupMetadata** | Retrieves deduplication optimization metadata. | 
| **Update-FSxDedupStatus** | Computes and retrieves updated data deduplication savings information. | 
| **Measure-FSxDedupFileMetadata** | Measures and retrieves the potential storage space that you can reclaim on your file system if you delete a group of folders. Files often have chunks that are shared across other folders, and the deduplication engine calculates which chunks are unique and would be deleted. | 
| **Get-FSxDedupSchedule** | Retrieves deduplication schedules that are currently defined. | 
| **[New-FSxDedupSchedule](#new-dedup-sched)** | Create and customize a data deduplication schedule. | 
| **[Set-FSxDedupSchedule](#set-dedup-sched)** | Change configuration settings for existing data deduplication schedules. | 
| **Remove-FSxDedupSchedule** | Delete a deduplication schedule. | 
| **Get-FSxDedupJob** | Get status and information for all currently running or queued deduplication jobs. | 
| **Stop-FSxDedupJob** | Cancel one or more specified data deduplication jobs. | 

The online help for each command provides a reference of all command options. To access this help, run the command with **-?**, for example **Enable-FSxDedup -?**. 

## Enabling data deduplication


You enable data deduplication on an Amazon FSx for Windows File Server file share using the `Enable-FSxDedup` command, as follows.

```
PS C:\Users\Admin> Invoke-Command -ComputerName amznfsxzzzzzzzz.corp.example.com -ConfigurationName FSxRemoteAdmin -ScriptBlock {Enable-FSxDedup}
```

When you enable data deduplication, a default schedule and configuration are created. You can create, modify, and remove schedules and configurations using the commands below.

You can use the `Disable-FSxDedup` command to disable data deduplication entirely on your file system.

## Creating a data deduplication schedule


Although the default schedule works well in most cases, you can create a new deduplication schedule by using the `New-FSxDedupSchedule` command, shown as follows. Data deduplication schedules use UTC time.

```
PS C:\Users\Admin> Invoke-Command -ComputerName amznfsxzzzzzzzz.corp.example.com -ConfigurationName FSxRemoteAdmin -ScriptBlock {   
New-FSxDedupSchedule -Name "CustomOptimization" -Type Optimization -Days Mon,Wed,Sat -Start 08:00 -DurationHours 7
}
```

This command creates a schedule named `CustomOptimization` that runs on Monday, Wednesday, and Saturday, starting at 8:00 AM (UTC) each day, with a maximum duration of 7 hours, after which the job stops if it is still running.

Note that creating a new, custom deduplication job schedule does not override or remove the existing default schedule. Before creating a custom deduplication job, you may want to disable the default job if you don’t need it.

You can disable the default deduplication schedule by using the `Set-FSxDedupSchedule` command, shown as follows.

```
PS C:\Users\Admin> Invoke-Command -ComputerName amznfsxzzzzzzzz.corp.example.com -ConfigurationName FSxRemoteAdmin -ScriptBlock {Set-FSxDedupSchedule -Name "BackgroundOptimization" -Enabled $false}
```

You can remove a deduplication schedule by using the `Remove-FSxDedupSchedule -Name "ScheduleName"` command. Note that the default `BackgroundOptimization` deduplication schedule cannot be modified or removed; you must disable it instead.

## Modifying a data deduplication schedule


You can modify an existing deduplication schedule by using the `Set-FSxDedupSchedule` command, shown as follows.

```
PS C:\Users\Admin> Invoke-Command -ComputerName amznfsxzzzzzzzz.corp.example.com -ConfigurationName FSxRemoteAdmin -ScriptBlock {   
Set-FSxDedupSchedule -Name "CustomOptimization" -Type Optimization -Days Mon,Tues,Wed,Sat -Start 09:00 -DurationHours 9
}
```

This command modifies the existing `CustomOptimization` schedule to run on Monday through Wednesday and on Saturday, starting at 9:00 AM (UTC) each day, with a maximum duration of 9 hours, after which the job stops if it is still running. 

To modify the minimum file age for optimization, use the `Set-FSxDedupConfiguration` command. 

## Viewing the amount of saved space


To view the amount of disk space that you are saving by running data deduplication, use the `Get-FSxDedupStatus` command, as follows.

```
PS C:\Users\Admin> Invoke-Command -ComputerName amznfsxzzzzzzzz.corp.example.com -ConfigurationName FSxRemoteAdmin -ScriptBlock { 
Get-FSxDedupStatus } | select OptimizedFilesCount,OptimizedFilesSize,SavedSpace,OptimizedFilesSavingsRate

OptimizedFilesCount OptimizedFilesSize SavedSpace OptimizedFilesSavingsRate
------------------- ------------------ ---------- -------------------------
              12587           31163594   25944826                        83
```

**Note**  
The values shown in the command response for the following parameters are not reliable, and you should not use them: Capacity, FreeSpace, UsedSpace, UnoptimizedSize, and SavingsRate.
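The savings rate in the sample output can be reproduced from the other fields. The sketch below assumes that `OptimizedFilesSavingsRate` is `SavedSpace` divided by `OptimizedFilesSize` (the logical size of the optimized files), truncated to a whole percentage; treat this as an illustration of how the fields relate, not a documented formula.

```python
def savings_rate(saved_space, optimized_files_size):
    """Percentage of the optimized files' logical size reclaimed by dedup."""
    return int(saved_space / optimized_files_size * 100)

# Values from the sample Get-FSxDedupStatus output above
print(savings_rate(25944826, 31163594))  # 83
```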

# Troubleshooting data deduplication


Use the following information to help troubleshoot some common issues when configuring and using data deduplication.

**Topics**
+ [

## Data deduplication is not working
](#data-dedup-not-working)
+ [

## Deduplication values are unexpectedly set to 0
](#data-dedup-stopped)
+ [

## Space is not freed up on file system after deleting files
](#data-dedup-freed-space)

## Data deduplication is not working


To see the current status of data deduplication, run the `Get-FSxDedupStatus` PowerShell command to view the completion status of the most recent deduplication jobs. If one or more jobs are failing, you may not see an increase in free storage capacity on your file system.

The most common reason for deduplication jobs failing is insufficient memory.
+ Microsoft [recommends](https://docs.microsoft.com/en-us/windows-server/storage/data-deduplication/install-enable#faq) having 1 GB of memory per 1 TB of logical data (or at a minimum 350 MB per 1 TB of logical data). Use the [Amazon FSx performance table](performance.md#performance-table) to determine the memory associated with your file system's throughput capacity, and ensure that the memory resources are sufficient for the size of your data. If they are not, [increase the file system's throughput capacity](managing-throughput-capacity.md) to a level that meets the memory requirement of 1 GB per 1 TB of logical data.
+ Deduplication jobs are configured with the Windows recommended default of 25% memory allocation, which means that for a file system with 32 GB of memory, 8 GB is available for deduplication. The memory allocation is configurable (using the `Set-FSxDedupSchedule` command with the `-Memory` parameter). Be aware that a higher memory allocation for deduplication can affect file system performance.
+ You can modify the configuration of deduplication jobs to reduce the amount of memory required. For example, you can constrain the optimization to run on specific file types or folders, or set a minimum file size and age for optimization. We also recommend configuring deduplication jobs to run during idle periods when there is minimal load on your file system.
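Microsoft's sizing guidance above can be turned into a quick check. The helper below is an illustrative estimate, assuming 1 GB (recommended) or 350 MB (minimum) of memory per 1 TB of logical data, and the Windows default of 25 percent of file system memory available to deduplication jobs; it is not an AWS or Microsoft tool.

```python
def dedup_memory_check(logical_data_tb, file_system_memory_gb, dedup_allocation_pct=25):
    """Compare memory available to dedup jobs against Microsoft's guidance."""
    recommended_gb = 1.0 * logical_data_tb   # 1 GB per TB of logical data, optimal
    minimum_gb = 0.35 * logical_data_tb      # 350 MB per TB of logical data, floor
    available_gb = file_system_memory_gb * dedup_allocation_pct / 100
    return {
        "available_gb": available_gb,
        "meets_recommended": available_gb >= recommended_gb,
        "meets_minimum": available_gb >= minimum_gb,
    }

# A file system with 32 GB of memory has 8 GB available for dedup by default,
# which comfortably covers 6 TB of logical data
print(dedup_memory_check(logical_data_tb=6, file_system_memory_gb=32))
```

If the check fails, the options mirror the bullets above: raise throughput capacity (more memory), raise the dedup memory allocation, or narrow the job's scope.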

You may also see errors if deduplication jobs have insufficient time to complete. You may need to change the maximum duration of jobs, as described in [Modifying a data deduplication schedule](managing-data-dedup.md#set-dedup-sched).

If deduplication jobs have been failing for a long period of time, and there have been changes to the data on the file system during this period, subsequent deduplication jobs may require more resources to complete successfully for the first time.

## Deduplication values are unexpectedly set to 0


The values for `SavedSpace` and `OptimizedFilesSavingsRate` are unexpectedly 0 for a file system on which you have configured data deduplication.

This can occur during the storage optimization process when you increase the file system's storage capacity. When you increase a file system's storage capacity, Amazon FSx cancels existing data deduplication jobs during the storage optimization process, which migrates data from the old disks to the new, larger disks. Amazon FSx resumes data deduplication on the file system once the storage optimization job completes. For more information about increasing storage capacity and storage optimization, see [Managing storage capacity](managing-storage-configuration.md#managing-storage-capacity).

## Space is not freed up on file system after deleting files


This is expected behavior for data deduplication: if deduplication had saved space on the deleted data, the space is not actually freed on your file system until the garbage collection job runs.

You may find it helpful to schedule the garbage collection job to run shortly after you delete a large number of files. After the garbage collection job finishes, set the schedule back to its original settings. This way, you see the space freed by your deletions right away.

Use the following procedure to set the garbage collection job to run in 5 minutes.

1. To verify that data deduplication is enabled, use the `Get-FSxDedupStatus` command. For more information on the command and its expected output, see [Viewing the amount of saved space](managing-data-dedup.md#get-dedup-status).

1. Use the following to set the schedule to run the garbage collection job 5 minutes from now.

   ```
   $FiveMinutesFromNowUTC = ((get-date).AddMinutes(5)).ToUniversalTime()
   $DayOfWeek = $FiveMinutesFromNowUTC.DayOfWeek
   $Time = $FiveMinutesFromNowUTC.ToString("HH:mm")
   
   Invoke-Command -ComputerName ${RPS_ENDPOINT} -ConfigurationName FSxRemoteAdmin -ScriptBlock {   
       Set-FSxDedupSchedule -Name "WeeklyGarbageCollection" -Days $Using:DayOfWeek -Start $Using:Time -DurationHours 9
   }
   ```

1. After the garbage collection job has run and the space has been freed up, set the schedule back to its original settings.