

# What is Amazon Elastic File System?


Amazon Elastic File System (Amazon EFS) provides serverless, fully elastic file storage so that you can share file data without provisioning or managing storage capacity and performance. Amazon EFS is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. Because Amazon EFS has a simple web services interface, you can create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, so that you can avoid the complexity of deploying, patching, and maintaining file system configurations.

Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Amazon EFS is accessible across most types of Amazon Web Services compute instances, including Amazon EC2, Amazon ECS, Amazon EKS, AWS Lambda, and AWS Fargate. 

The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS offers the following file system types to meet your availability and durability needs:
+ *Regional* (Recommended) – Regional file systems store data redundantly across multiple geographically separated Availability Zones within the same AWS Region. Storing data across multiple Availability Zones provides continuous availability to the data, even when one or more Availability Zones in an AWS Region are unavailable.
+ *One Zone* – One Zone file systems store data within a single Availability Zone, which provides continuous availability within that Availability Zone. However, in the unlikely case of the loss or damage to all or part of the Availability Zone, data that is stored in these types of file systems might be lost.

For more information about file system types, see [EFS file system types](features.md#file-system-type).

Amazon EFS provides the throughput, IOPS, and low latency needed for a broad range of workloads. EFS file systems can grow to petabyte scale, drive high levels of throughput, and allow massively parallel access from compute instances to your data. For most workloads, we recommend using the default modes, which are the General Purpose performance mode and the Elastic throughput mode.
+ *General Purpose* – The General Purpose performance mode is ideal for latency-sensitive applications, like web-serving environments, content-management systems, home directories, and general file serving. 
+ *Elastic* – The Elastic throughput mode is designed to automatically scale throughput performance up or down to meet the needs of your workload activity.
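As a sketch, the default modes can be selected explicitly when creating a file system. The parameter names below follow the boto3 `create_file_system` request shape; the creation token is an illustrative placeholder, and the actual client call (shown commented out) requires AWS credentials.

```python
# Sketch: request parameters for a file system that uses the recommended
# defaults (General Purpose performance mode, Elastic throughput mode).
# Parameter names follow the boto3 EFS create_file_system request shape;
# the CreationToken value is a placeholder.
create_params = {
    "CreationToken": "my-efs-example",
    "PerformanceMode": "generalPurpose",
    "ThroughputMode": "elastic",
}

# With AWS credentials configured, the call would look like:
# import boto3
# efs = boto3.client("efs")
# response = efs.create_file_system(**create_params)

print(create_params["ThroughputMode"])
```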

For more information about EFS performance and throughput modes, see [Amazon EFS performance specifications](performance.md). 

Amazon EFS provides file-system-access semantics, such as strong data consistency and file locking. For more information, see [Data consistency in Amazon EFS](features.md#consistency). Amazon EFS also supports controlling access to your file systems through Portable Operating System Interface (POSIX) permissions. For more information, see [Securing your data in Amazon EFS](security-considerations.md).

Amazon EFS supports authentication, authorization, and encryption capabilities to help you meet your security and compliance requirements. Amazon EFS supports two forms of encryption for file systems: encryption in transit and encryption at rest. You can enable encryption at rest when creating an EFS file system. If you do, all of your data and metadata is encrypted. You can enable encryption in transit when you mount the file system. NFS client access to Amazon EFS is controlled by both AWS Identity and Access Management (IAM) policies and network security policies, such as security groups. For more information, see [Data encryption in Amazon EFS](encryption.md), [Identity and access management for Amazon EFS](security-iam.md), and [Controlling network access to EFS file systems for NFS clients](NFS-access-control-efs.md). 
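To illustrate how IAM and encryption in transit combine, the following sketch builds an EFS file system policy that allows mounting and writing only over TLS connections. The `elasticfilesystem:ClientMount` and `elasticfilesystem:ClientWrite` actions and the `aws:SecureTransport` condition key are real IAM elements; the account ID is a placeholder.

```python
import json

# Sketch: an EFS file system policy permitting mount and write access
# only over encrypted (TLS) connections. The account ID is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
            ],
            # Deny unencrypted transport by requiring TLS.
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

policy_json = json.dumps(policy)
print(policy_json[:30])
```

A policy like this would be attached to the file system (for example, with the `PutFileSystemPolicy` API) rather than to an IAM identity.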

**Note**  
Using Amazon EFS with Microsoft Windows–based Amazon EC2 instances is not supported.

## Are you a first-time user of Amazon EFS?


 If you are a first-time user of Amazon EFS, we recommend that you read the following sections in order:

1. For an Amazon EFS product and pricing overview, see [Amazon EFS](https://aws.amazon.com/efs/).

1. For an Amazon EFS technical overview, see [How Amazon EFS works](how-it-works.md). 

1. Try the [Getting started](getting-started.md) exercise.

If you want to learn more about Amazon EFS, the following topics discuss the service in greater detail:
+ [Creating and managing EFS resources](creating-using.md)
+ [Managing EFS file systems](managing.md)
+ [Amazon EFS API](api-reference.md)



# How Amazon EFS works

Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget elastic file system. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an EFS file system in your virtual private cloud (VPC), through the Network File System versions 4.0 and 4.1 (NFSv4) protocol. We recommend using a current generation Linux NFSv4.1 client, such as those found in the latest Amazon Linux, Amazon Linux 2, Red Hat, Ubuntu, and macOS Big Sur AMIs, in conjunction with the EFS mount helper. For instructions, see [Installing the Amazon EFS client](using-amazon-efs-utils.md).

For a list of Amazon EC2 Linux and macOS Amazon Machine Images (AMIs) that support this protocol, see [NFS support](mounting-fs-old.md#mounting-fs-nfs-info). For some AMIs, you must install an NFS client to mount your file system on your Amazon EC2 instance. For instructions, see [Installing the NFS client](mounting-fs-install-nfsclient.md).

You can access your EFS file system concurrently from multiple NFS clients, so applications that scale beyond a single connection can access a file system. Amazon EC2 and other AWS compute instances running in multiple Availability Zones within the same AWS Region can access the file system, so that many users can access and share a common data source.

For a list of AWS Regions where you can create an EFS file system, see the [Amazon Web Services General Reference](https://docs.aws.amazon.com/general/latest/gr/rande.html#elasticfilesystem_region).

To access your EFS file system in a VPC, you create one or more mount targets in the VPC. A *mount target* provides an IP address for an NFSv4 endpoint at which you can mount an EFS file system. You mount your file system using its Domain Name System (DNS) name, which resolves to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in an AWS Region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets. Then all EC2 instances in that Availability Zone share that mount target.
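The DNS name described above follows a fixed pattern, which the following sketch constructs. The file system ID and Region are placeholders for illustration.

```python
# Sketch: an EFS file system DNS name follows the pattern
# <file-system-id>.efs.<aws-region>.amazonaws.com.
# The file system ID and Region below are placeholders.
def efs_dns_name(file_system_id: str, region: str) -> str:
    return f"{file_system_id}.efs.{region}.amazonaws.com"

dns = efs_dns_name("fs-0123456789abcdef0", "us-east-1")
print(dns)

# With the EFS mount helper installed, mounting would then look
# something like:
#   sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs
```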

**Note**  
An EFS file system can have mount targets in only one VPC at a time.

Mount targets themselves are designed to be highly available. As you design for high availability and failover to other Availability Zones, keep in mind that while the IP addresses and DNS for your mount targets in each Availability Zone are static, they are redundant components backed by multiple resources. For more information about mount targets, see [Managing mount targets](accessing-fs.md).

After mounting the file system by using its DNS name, you use it like any other file system. For information about NFS-level permissions and related considerations, see [Network File System (NFS) level users, groups, and permissions](accessing-fs-nfs-permissions.md). 

You can mount your EFS file systems on your on-premises data center servers when connected to your Amazon VPC with AWS Direct Connect or Site-to-Site VPN. You can mount your EFS file systems on on-premises servers to migrate datasets to EFS, enable cloud bursting scenarios, or back up your on-premises data to Amazon EFS.

Following, you can find a description of how Amazon EFS works with other services.

**Topics**
+ [How Amazon EFS works with Amazon EC2](#how-it-works-ec2)
+ [How Amazon EFS works with AWS Direct Connect and AWS Managed VPN](#how-it-works-direct-connect)
+ [How Amazon EFS works with AWS Backup](#how-it-works-backups)

## How Amazon EFS works with Amazon EC2


This section explains how Amazon EFS Regional and One Zone file systems are mounted to EC2 instances in an Amazon VPC. 

### Regional EFS file systems


The following illustration shows multiple EC2 instances accessing an Amazon EFS file system that is configured for multiple Availability Zones in an AWS Region.

![\[Regional file system with mount targets in three Availability Zones within a VPC on EC2 instances.\]](http://docs.aws.amazon.com/efs/latest/ug/images/efs-ec2-how-it-works-Regional_china-world.png)


In this illustration, the virtual private cloud (VPC) has three Availability Zones. Because the file system is Regional, a mount target was created in each Availability Zone. We recommend that you access the file system from a mount target within the same Availability Zone for performance and cost reasons. One of the Availability Zones has two subnets. However, a mount target is created in only one of the subnets. For more information, see [Mounting EFS file systems using the EFS mount helper](efs-mount-helper.md).

### One Zone EFS file systems


The following illustration shows multiple EC2 instances accessing a One Zone file system from different Availability Zones in a single AWS Region.

![\[One Zone file system with a single mount target created in the same Availability Zone.\]](http://docs.aws.amazon.com/efs/latest/ug/images/efs-ec2-how-it-works-OneZone.png)


In this illustration, the VPC has two Availability Zones, each with one subnet. Because the file system type is One Zone, it can only have a single mount target. For better performance and cost, we recommend that you access the file system from a mount target in the same Availability Zone as the EC2 instance that you're mounting it on.

In this example, the EC2 instance in the us-west-2c Availability Zone will pay EC2 data access charges for accessing a mount target in a different Availability Zone. For more information, see [Mounting One Zone file systems](mounting-one-zone.md).

## How Amazon EFS works with AWS Direct Connect and AWS Managed VPN


By using an Amazon EFS file system mounted on an on-premises server, you can migrate on-premises data into an Amazon EFS file system hosted in the AWS Cloud. You can also take advantage of cloud bursting. In other words, you can move data from your on-premises servers into Amazon EFS and analyze it on a fleet of Amazon EC2 instances in your Amazon VPC. You can then store the results permanently in your file system or move the results back to your on-premises server.

Keep the following considerations in mind when using Amazon EFS with an on-premises server:
+ Your on-premises server must have a Linux-based operating system. We recommend Linux kernel version 4.0 or later.
+ For the sake of simplicity, we recommend mounting an Amazon EFS file system on an on-premises server using a mount target IP address instead of a DNS name.

There is no additional cost for on-premises access to your Amazon EFS file systems. You are charged for the Direct Connect connection to your Amazon VPC. For more information, see [AWS Direct Connect pricing](https://aws.amazon.com/directconnect/pricing/).

The following illustration shows an example of how to access an Amazon EFS file system from on-premises (the on-premises servers have the file systems mounted).

![\[Mount an EFS file system on an on-premises client when using Direct Connect.\]](http://docs.aws.amazon.com/efs/latest/ug/images/efs-directconnect-how-it-works.png)


You can use any mount target in your VPC if you can reach that mount target's subnet by using an AWS Direct Connect connection between your on-premises server and VPC. To access Amazon EFS from an on-premises server, add a rule to your mount target security group to allow inbound traffic to the NFS port (2049) from your on-premises server. For more information, including detailed procedures, see [Prerequisites](mounting-fs-mount-helper-direct.md#efs-onpremises).
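As a sketch, the inbound NFS rule described above can be expressed as follows. The field names follow the boto3 `authorize_security_group_ingress` request shape; the CIDR range is a placeholder for your on-premises network.

```python
# Sketch: an inbound security-group rule allowing NFS traffic
# (TCP port 2049) from an on-premises network. Field names follow
# the boto3 authorize_security_group_ingress request shape; the
# CIDR range is a placeholder.
nfs_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 2049,  # NFS
    "ToPort": 2049,
    "IpRanges": [
        {"CidrIp": "10.0.0.0/16", "Description": "on-premises network"}
    ],
}

print(nfs_ingress_rule["FromPort"])
```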

## How Amazon EFS works with AWS Backup


For a comprehensive backup implementation for your file systems, you can use Amazon EFS with AWS Backup. AWS Backup is a fully managed backup service that makes it easy to centralize and automate data backup across AWS services in the cloud and on-premises. Using AWS Backup, you can centrally configure backup policies and monitor backup activity for your AWS resources. Amazon EFS always prioritizes file system operations over backup operations. To learn more about backing up EFS file systems using AWS Backup, see [Backing up EFS file systems](awsbackup.md).

# Features of Amazon EFS

Following are features of Amazon EFS.

**Topics**
+ [Authentication and access control](#auth-access-intro)
+ [Data consistency in Amazon EFS](#consistency)
+ [Availability and durability of EFS file systems](#availability-durability)
+ [Replication](#how-efs-replication-works)

## Authentication and access control


You must have valid credentials to use the Amazon EFS management console and to make Amazon EFS API requests, such as creating a file system. You must also have permissions to create or access other EFS and AWS resources.

Users and roles that you create in AWS Identity and Access Management (IAM) must be granted permissions to create or access resources. For more information about permissions, see [Identity and access management for Amazon EFS](security-iam.md).

IAM authorization for NFS clients is an additional security option for Amazon EFS that uses IAM to simplify access management for Network File System (NFS) clients at scale. With IAM authorization for NFS clients, you can use IAM to manage access to an EFS file system in an inherently scalable way. IAM authorization for NFS clients is also optimized for cloud environments. For more information on using IAM authorization for NFS clients, see [Using IAM to control access to file systems](iam-access-control-nfs-efs.md).

## Data consistency in Amazon EFS


Amazon EFS provides the close-to-open consistency semantics that applications expect from NFS.

In Amazon EFS, write operations for Regional file systems are durably stored across Availability Zones in these situations:
+ An application performs a synchronous write operation (for example, using the `open` system call with the `O_DIRECT` flag, or calling `fsync`).
+ An application closes a file.

Depending on the access pattern, Amazon EFS can provide stronger consistency guarantees than close-to-open semantics. Applications that perform synchronous data access and perform non-appending writes have read-after-write consistency for data access.
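The synchronous-write cases above can be illustrated with standard POSIX calls. On an EFS mount, `fsync` (or closing the file) causes the written data to be durably stored; this example runs against a local temporary file purely for illustration.

```python
import os
import tempfile

# Sketch: a durable (synchronous) write using POSIX calls. On an EFS
# Regional file system, fsync() and close() durably store the write
# across Availability Zones; here we use a local temp file to show
# the call pattern.
path = os.path.join(tempfile.mkdtemp(), "example.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"durable payload")
os.fsync(fd)   # flush the data to stable storage before continuing
os.close(fd)   # close-to-open consistency: visible to the next open()

with open(path, "rb") as f:
    data = f.read()
print(data)
```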

### File locking


NFS client applications can use NFS version 4 file locking (including byte-range locking) for read and write operations on Amazon EFS files.

Remember the following about how Amazon EFS locks files:
+ Amazon EFS supports only advisory locking; read and write operations don't check for conflicting locks before executing. To avoid file synchronization issues with atomic operations, your application must be aware of NFS semantics (such as close-to-open consistency).
+ Any one particular file can have up to 512 locks across all instances connected and users accessing the file.
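The advisory byte-range locking described above uses the same POSIX mechanism exposed by Python's `fcntl` module. The following sketch locks part of a file exclusively, writes to it, and releases the lock; it runs against a local file for illustration (on Unix-like systems only).

```python
import fcntl
import os
import tempfile

# Sketch: advisory byte-range locking via fcntl, the POSIX mechanism
# NFSv4 clients use with EFS. The lock is advisory: it coordinates
# only processes that also request locks; plain reads and writes are
# not blocked by it. A local file stands in for an EFS mount here.
path = os.path.join(tempfile.mkdtemp(), "locked.dat")
with open(path, "wb") as f:
    f.write(b"0" * 1024)

with open(path, "r+b") as f:
    # Exclusively lock bytes 0-511, leaving the rest of the file free.
    fcntl.lockf(f, fcntl.LOCK_EX, 512, 0, os.SEEK_SET)
    f.seek(0)
    f.write(b"A" * 512)
    # Release the byte-range lock.
    fcntl.lockf(f, fcntl.LOCK_UN, 512, 0, os.SEEK_SET)

with open(path, "rb") as f:
    head = f.read(512)
print(head[:4])
```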

## Availability and durability of EFS file systems

This section describes the file system types and storage class options for Amazon Elastic File System (Amazon EFS) file systems.

### EFS file system types

Amazon EFS offers Regional and One Zone file system types. 
+ **Regional** – Regional file systems (recommended) store data redundantly across multiple geographically separated Availability Zones within the same AWS Region. Storing data across multiple Availability Zones provides continuous availability to the data, even when one or more Availability Zones in an AWS Region are unavailable.
+ **One Zone** – One Zone file systems store data within a single Availability Zone, which provides continuous availability within that Availability Zone.

  In the unlikely case of the loss or damage to all or part of an AWS Availability Zone, data in a One Zone storage class may be lost. For example, events like fire and water damage could result in data loss. Apart from these types of events, One Zone storage classes use engineering designs similar to Regional storage classes to protect data from independent disk, host, and rack-level failures, and each is designed to deliver 99.999999999% data durability.

  For added data protection, Amazon EFS automatically backs up One Zone file systems with AWS Backup. You can restore file system backups to any operational Availability Zone within an AWS Region, or you can restore them to a different AWS Region. EFS file system backups that are created and managed using AWS Backup are replicated to three Availability Zones and are designed for durability. For more information, see [Resilience in AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/disaster-recovery-resiliency.html).
**Note**  
One Zone file systems are available to only certain Availability Zones. For a table that lists the Availability Zones in which you can use One Zone file systems, see [Supported Availability Zones for One Zone file systems](#OneZoneAZs). 

The following table compares the file system types, including their availability, durability, and other considerations.


| File system type | Designed for | Durability (designed for) | Availability | Availability Zones | Other considerations | 
| --- | --- | --- | --- | --- | --- | 
|  Regional  |  Data requiring the highest durability and availability.  |  99.999999999% (11 9s)  |  99.99%  |  >=3  |  None  | 
|  One Zone  |  Data that doesn't require the highest durability and availability.  |  99.999999999% (11 9s)  |  99.99%  |  1  | Not resilient to the loss of the Availability Zone | 
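As a rule-of-thumb illustration (not an SLA statement), the 99.99% availability design target in the table translates to expected unavailability per year as follows.

```python
# Sketch: converting a 99.99% availability design target into
# expected unavailability per year. This is a back-of-the-envelope
# calculation, not an SLA commitment.
availability = 0.9999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

expected_downtime_min = (1 - availability) * minutes_per_year
print(round(expected_downtime_min, 1))  # roughly 52.6 minutes per year
```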

### Supported Availability Zones for One Zone file systems

One Zone file systems are available to only certain Availability Zones. The following table lists the AWS Region and the AZ IDs for each Availability Zone in which you can use One Zone file systems. To see the mapping of AZ IDs to Availability Zones in your account, see [Availability Zone IDs for your AWS Resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) in the *AWS Resource Access Manager User Guide*.


**Availability Zones that support One Zone file systems**  

| AWS Region Name | AWS Region Code | Supported AZ IDs | 
| --- | --- | --- | 
| US East (Ohio) | us-east-2 |  use2-az1, use2-az2, use2-az3  | 
| US East (N. Virginia) | us-east-1 |  use1-az1, use1-az2, use1-az4, use1-az5, use1-az6  | 
| US West (N. California) | us-west-1 | usw1-az1, usw1-az3 | 
| US West (Oregon) | us-west-2 | usw2-az1, usw2-az2, usw2-az3, usw2-az4 | 
| Africa (Cape Town)  | af-south-1 | afs1-az1, afs1-az2, afs1-az3  | 
| Asia Pacific (Hong Kong) | ap-east-1 | ape1-az1, ape1-az2, ape1-az3  | 
| Asia Pacific (Mumbai) | ap-south-1 | aps1-az1, aps1-az2, aps1-az3  | 
| Asia Pacific (Osaka) | ap-northeast-3 | apne3-az1, apne3-az2, apne3-az3  | 
| Asia Pacific (Seoul) | ap-northeast-2 | apne2-az1, apne2-az2, apne2-az3  | 
| Asia Pacific (Singapore) | ap-southeast-1 | apse1-az1, apse1-az2  | 
| Asia Pacific (Sydney) | ap-southeast-2 | apse2-az1, apse2-az2, apse2-az3  | 
| Asia Pacific (Tokyo) | ap-northeast-1 | apne1-az1, apne1-az4  | 
| Canada (Central) | ca-central-1 | cac1-az1, cac1-az2 | 
| China (Beijing) | cn-north-1 | cnn1-az1, cnn1-az2 | 
| China (Ningxia) | cn-northwest-1 | cnnw1-az1, cnnw1-az2, cnnw1-az3 | 
| Europe (Frankfurt) | eu-central-1 | euc1-az1, euc1-az2, euc1-az3 | 
| Europe (Ireland) | eu-west-1 | euw1-az1, euw1-az2, euw1-az3 | 
| Europe (London) | eu-west-2 | euw2-az1, euw2-az2 | 
| Europe (Milan) | eu-south-1 | eus1-az1, eus1-az2, eus1-az3  | 
| Europe (Paris) | eu-west-3 | euw3-az1, euw3-az3 | 
| Europe (Stockholm) | eu-north-1 | eun1-az1, eun1-az2, eun1-az3  | 
| Middle East (Bahrain) | me-south-1 | mes1-az1, mes1-az2, mes1-az3 | 
| South America (São Paulo) | sa-east-1 | sae1-az1, sae1-az2, sae1-az3  | 
| AWS GovCloud (US-East) | us-gov-east-1 | usge1-az1, usge1-az2, usge1-az3 | 
| AWS GovCloud (US-West) | us-gov-west-1 | usgw1-az1, usgw1-az2, usgw1-az3  | 

### EFS storage classes

Amazon EFS offers different storage classes that are designed to provide the most cost-effective storage for your use case.
+ **EFS Standard** – The EFS Standard storage class uses solid state drive (SSD) storage to deliver the lowest levels of latency for frequently accessed files. New file system data is first written to the EFS Standard storage class and then can be tiered to the EFS Infrequent Access and EFS Archive storage classes by using lifecycle management.
+ **EFS Infrequent Access (IA)** – A cost-optimized storage class for data that is accessed only a few times each quarter.
+ **EFS Archive** – A cost-optimized storage class for data that is accessed a few times each year or less.

  The EFS Archive storage class is supported on EFS file systems with Elastic throughput. You cannot update your file system’s throughput to Bursting or Provisioned once the file system has data in the Archive storage class. 

#### Comparing storage classes

The following table compares the storage classes. For more details about the performance of each storage class, see [Amazon EFS performance specifications](performance.md).

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/efs/latest/ug/features.html)

1Because One Zone file systems store data in a single AWS Availability Zone, data that is stored in these types of file systems might be lost in the event of a disaster or other fault that affects all copies of the data within the Availability Zone, or in the event of Availability Zone destruction.

2Lifecycle policies updated on or after 12 PM PT, November 26, 2023 will tier files that are smaller than 128 KiB to the IA class. For more information about how Amazon EFS meters and bills for individual files and metadata, see [How Amazon EFS reports file system and object sizes](metered-sizes.md).

#### Storage class billing


You are billed for the amount of data in each storage class. You are also billed data access charges when files in IA or Archive storage are read, or for data that transitions between storage classes using lifecycle management. The AWS bill displays the capacity for each storage class and the metered access against the file system's storage class. To learn more, see [Amazon EFS Pricing](https://aws.amazon.com/efs/pricing).

Additionally, Infrequent Access (IA) and Archive storage classes have a minimum billing charge per file of 128 KiB. Support for files smaller than 128 KiB is only available for lifecycle policies updated on or after 12:00 PM PT, November 26, 2023. For more information on how Amazon EFS meters and bills for individual files and metadata, see [How Amazon EFS reports file system and object sizes](metered-sizes.md).
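The 128 KiB minimum billing charge works out as follows: a file in IA or Archive storage is metered at no less than 128 KiB, regardless of its actual size.

```python
# Sketch: the minimum billing charge per file for the IA and Archive
# storage classes. Files smaller than 128 KiB are metered as 128 KiB;
# larger files are metered at their actual size.
KIB = 1024
MIN_BILLED_BYTES = 128 * KIB

def billed_size(actual_size_bytes: int) -> int:
    return max(actual_size_bytes, MIN_BILLED_BYTES)

print(billed_size(4 * KIB))    # a 4 KiB file is metered as 128 KiB
print(billed_size(512 * KIB))  # a 512 KiB file is metered at actual size
```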

Additional billing considerations apply depending on throughput mode. 
+ For file systems using Elastic throughput, you are billed for the total monthly amount of metadata and data transferred for your file systems, independent of the storage charges. 
+ For file systems using Provisioned throughput, you are billed for any throughput provisioned above the baseline throughput that you're provided based on the amount of data in the EFS Standard storage class.
+ For file systems using Bursting throughput, the allowed throughput is determined based on the amount of the data stored in the EFS Standard storage class only. 

For more information about EFS throughput modes, see [Throughput modes](performance.md#throughput-modes). 

**Note**  
You don't incur data access charges when using AWS Backup to back up lifecycle management-enabled EFS file systems. To learn more about AWS Backup with Amazon EFS, see [Backing up EFS file systems](awsbackup.md).

### Lifecycle management


To manage your file systems so that they are stored cost effectively throughout their lifecycle, use lifecycle management. Lifecycle management automatically transitions data between storage classes according to the lifecycle configuration defined for the file system. The lifecycle configuration is a set of *lifecycle policies* that define when to transition the file system data to another storage class. For more information, see [Managing storage lifecycle](lifecycle-management-efs.md).
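As a sketch, a lifecycle configuration is a list of policies like the following, expressed here in the boto3 `put_lifecycle_configuration` request shape. The transition values shown are examples of the supported `AFTER_*` settings, and the file system ID in the commented call is a placeholder.

```python
# Sketch: a set of lifecycle policies in the boto3
# put_lifecycle_configuration request shape. The AFTER_* values shown
# are example settings.
lifecycle_policies = [
    {"TransitionToIA": "AFTER_30_DAYS"},       # tier to Infrequent Access
    {"TransitionToArchive": "AFTER_90_DAYS"},  # tier to Archive
    # Move a file back to primary storage when it is accessed again.
    {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
]

# With AWS credentials configured, the call would look like:
# import boto3
# boto3.client("efs").put_lifecycle_configuration(
#     FileSystemId="fs-0123456789abcdef0",  # placeholder ID
#     LifecyclePolicies=lifecycle_policies,
# )

print(len(lifecycle_policies))
```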

## Replication


You can create a replica of your Amazon EFS file system in the AWS Region of your preference using replication. Replication automatically and transparently replicates the data and metadata on your EFS file system to a new destination EFS file system that is created in an AWS Region that you choose. EFS automatically keeps the source and destination file systems synchronized. Replication is continual and designed to provide a recovery point objective (RPO) and a recovery time objective (RTO) of minutes. These features assist you in meeting your compliance and business continuity goals. For more information, see [Replicating EFS file systems](efs-replication.md).
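As a sketch, setting up replication amounts to naming a source file system and a destination Region; EFS creates and synchronizes the destination file system for you. The parameters below follow the boto3 `create_replication_configuration` request shape, with a placeholder file system ID and Region.

```python
# Sketch: parameters for cross-Region replication, in the boto3
# create_replication_configuration request shape. The file system ID
# and destination Region are placeholders.
replication_params = {
    "SourceFileSystemId": "fs-0123456789abcdef0",
    # EFS creates the destination file system in the chosen Region.
    "Destinations": [{"Region": "eu-west-1"}],
}

# With AWS credentials configured, the call would look like:
# import boto3
# boto3.client("efs").create_replication_configuration(**replication_params)

print(replication_params["Destinations"][0]["Region"])
```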