

# Amazon EBS volumes

An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. EBS volumes are flexible. For current-generation volumes attached to current-generation instance types, you can dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes.

You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans. EBS volumes persist independently from the running life of an EC2 instance.

You can attach multiple EBS volumes to a single instance. The volume and instance must be in the same Availability Zone. Depending on the volume and instance types, you can use [Multi-Attach](ebs-volumes-multi.md) to mount a volume to multiple instances at the same time.

Amazon EBS provides the following volume types: General Purpose SSD (`gp2` and `gp3`), Provisioned IOPS SSD (`io1` and `io2`), Throughput Optimized HDD (`st1`), Cold HDD (`sc1`), and Magnetic (`standard`). They differ in performance characteristics and price, allowing you to tailor your storage performance and cost to the needs of your applications. For more information, see [Amazon EBS volume types](ebs-volume-types.md).

Your account has a limit on the total storage available to you. For more information about these limits, and how to request an increase in your limits, see [Amazon EBS endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ebs-service.html#limits_ebs).

A *managed EBS volume* is managed by a service provider, such as Amazon EKS Auto Mode. You can’t directly modify the settings of a managed EBS volume. Managed EBS volumes are identified by a **true** value in the **Managed** field. For more information, see [Amazon EC2 managed instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-ec2-managed-instances.html).

For more information about pricing, see [Amazon EBS Pricing](https://aws.amazon.com/ebs/pricing/).

**Topics**
+ [Features and benefits of Amazon EBS volumes](EBSFeatures.md)
+ [Amazon EBS volume types](ebs-volume-types.md)
+ [Amazon EBS volume constraints](volume_constraints.md)
+ [Amazon EBS volumes and NVMe](nvme-ebs-volumes.md)
+ [Amazon EBS volume lifecycle](ebs-volume-lifecycle.md)
+ [Replace an Amazon EBS volume using a snapshot](ebs-restoring-volume.md)
+ [Amazon EBS volume status checks](monitoring-volume-checks.md)
+ [Fault testing on Amazon EBS](ebs-fis.md)

# Features and benefits of Amazon EBS volumes

EBS volumes provide benefits that are not provided by instance store volumes.

**Topics**
+ [Data availability](#availability-benefit)
+ [Data persistence](#persistence-benefit)
+ [Data encryption](#encryption-benefit)
+ [Data security](#security-benefit)
+ [Snapshots](#backup-benefit)
+ [Flexibility](#flexibility-benefit)

## Data availability


When you create an EBS volume, it is automatically replicated within its Availability Zone to prevent data loss due to failure of any single hardware component. You can attach an EBS volume to any EC2 instance in the same Availability Zone. After you attach a volume, it appears as a native block device similar to a hard drive or other physical device. At that point, the instance can interact with the volume just as it would with a local drive. You can connect to the instance and format the EBS volume with a file system, such as `Ext4` for a Linux instance or `NTFS` for a Windows instance, and then install applications. 

If you attach multiple volumes to an instance, you can stripe data across the volumes for increased I/O and throughput performance.

You can attach `io1` and `io2` EBS volumes to up to 16 Nitro-based instances. For more information, see [Attach an EBS volume to multiple EC2 instances using Multi-Attach](ebs-volumes-multi.md). Otherwise, you can attach an EBS volume to a single instance.

You can get monitoring data for your EBS volumes, including root device volumes for EBS-backed instances, at no additional charge. For more information about monitoring metrics, see [Amazon CloudWatch metrics for Amazon EBS](using_cloudwatch_ebs.md). For information about tracking the status of your volumes, see [Amazon EventBridge events for Amazon EBS](ebs-cloud-watch-events.md).

## Data persistence


An EBS volume is off-instance storage that can persist independently from the life of an instance. You continue to pay for the volume usage as long as the data persists. 

If you clear the **Delete on termination** check box when you configure EBS volumes for your instance in the EC2 console, those volumes are automatically detached from the instance with their data intact when the instance is terminated. You can then reattach a volume to a new instance, enabling quick recovery. If the **Delete on termination** check box is selected, the volumes are deleted when the instance is terminated.

If you are using an EBS-backed instance, you can stop and restart that instance without affecting the data stored in the attached volume. The volume remains attached throughout the stop-start cycle, so you can process and store the data on your volume indefinitely, using processing and storage resources only when required. The data persists on the volume until the volume is explicitly deleted.

The physical block storage used by deleted EBS volumes is overwritten with zeroes or cryptographically pseudorandom data before it is allocated to a new volume. If you are dealing with sensitive data, consider encrypting your data manually or storing the data on a volume protected by Amazon EBS encryption. For more information, see [Amazon EBS encryption](ebs-encryption.md).

By default, the root EBS volume that is created and attached to an instance at launch is deleted when that instance is terminated. You can modify this behavior by changing the value of the flag `DeleteOnTermination` to `false` when you launch the instance. This modified value causes the volume to persist even after the instance is terminated, and enables you to attach the volume to another instance. 

By default, additional EBS volumes that are created and attached to an instance at launch are not deleted when that instance is terminated. You can modify this behavior by changing the value of the flag `DeleteOnTermination` to `true` when you launch the instance. This modified value causes the volumes to be deleted when the instance is terminated. 
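The `DeleteOnTermination` flag described above is set per volume in the instance's block device mapping. As an illustrative sketch (the device names, sizes, and defaults shown are hypothetical examples, not a definitive API call), the mapping you would pass when launching an instance might look like this:

```python
# Sketch of a block device mapping that overrides the DeleteOnTermination
# defaults in both directions. Device names and sizes are hypothetical.
block_device_mappings = [
    {
        # Root volume: override the default (deleted on termination)
        # so the volume persists after the instance is terminated.
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeType": "gp3", "DeleteOnTermination": False},
    },
    {
        # Additional data volume: opt in to deletion on termination
        # (additional volumes are kept by default).
        "DeviceName": "/dev/xvdf",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp3", "DeleteOnTermination": True},
    },
]

root = next(m for m in block_device_mappings if m["DeviceName"] == "/dev/xvda")
assert root["Ebs"]["DeleteOnTermination"] is False
```

The same structure can be supplied through the EC2 console, the AWS CLI, or an SDK when you launch the instance.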

## Data encryption


For simplified data encryption, you can create encrypted EBS volumes with the Amazon EBS encryption feature. All EBS volume types support encryption. You can use encrypted EBS volumes to meet a wide range of data-at-rest encryption requirements for regulated/audited data and applications. Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256) and an Amazon-managed key infrastructure. The encryption occurs on the server that hosts the EC2 instance, providing encryption of data-in-transit from the EC2 instance to Amazon EBS storage. For more information, see [Amazon EBS encryption](ebs-encryption.md). 

 Amazon EBS encryption uses AWS KMS keys when creating encrypted volumes and any snapshots created from your encrypted volumes. The first time you create an encrypted EBS volume in a Region, a default AWS managed KMS key is created for you automatically. This key is used for Amazon EBS encryption unless you create and use a customer managed key. Creating your own customer managed key gives you more flexibility, including the ability to create, rotate, disable, define access controls, and audit the encryption keys used to protect your data. For more information, see the [AWS Key Management Service Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/).

## Data security


Amazon EBS volumes are presented to you as raw, unformatted block devices. These devices are logical devices that are created on the EBS infrastructure and the Amazon EBS service ensures that the devices are logically empty (that is, the raw blocks are zeroed or they contain cryptographically pseudorandom data) prior to any use or re-use by a customer.

If you have procedures that require that all data be erased using a specific method, either after or before use (or both), such as those detailed in **DoD 5220.22-M** (National Industrial Security Program Operating Manual) or **NIST 800-88** (Guidelines for Media Sanitization), you have the ability to do so on Amazon EBS. That block-level activity will be reflected down to the underlying storage media within the Amazon EBS service.

## Snapshots


Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon S3, where it is stored redundantly in multiple Availability Zones. The volume does not need to be attached to a running instance in order to take a snapshot. As you continue to write data to a volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes. These snapshots can be used to create multiple new EBS volumes or move volumes across Availability Zones. Snapshots of encrypted EBS volumes are automatically encrypted. 

When you create a new volume from a snapshot, it's an exact copy of the original volume at the time the snapshot was taken. EBS volumes that are created from encrypted snapshots are automatically encrypted. By optionally specifying a different Availability Zone, you can use this functionality to create a duplicate volume in that zone. The snapshots can be shared with specific AWS accounts or made public. When you create snapshots, you incur charges in Amazon S3 based on the size of the data being backed up, not the size of the source volume. Subsequent snapshots of the same volume are incremental snapshots. They include only changed and new data written to the volume since the last snapshot was created, and you are charged only for this changed and new data.

Snapshots are incremental backups, meaning that only the blocks on the volume that have changed after your most recent snapshot are saved. If you have a volume with 100 GiB of data, but only 5 GiB of data have changed since your last snapshot, only the 5 GiB of modified data is written to Amazon S3. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot.

To help categorize and manage your volumes and snapshots, you can tag them with metadata of your choice.

To back up your volumes automatically, you can use [Amazon Data Lifecycle Manager](snapshot-lifecycle.md) or [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/).

## Flexibility


EBS volumes support live configuration changes while in production. You can modify volume type, volume size, and IOPS capacity without service interruptions. For more information, see [Modify an Amazon EBS volume using Elastic Volumes operations](ebs-modify-volume.md).

# Amazon EBS volume types

Amazon EBS provides the following volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. 

**Important**  
There are several factors that can affect the performance of EBS volumes, such as instance configuration, I/O characteristics, and workload demand. To fully use the IOPS provisioned on an EBS volume, use [EBS–optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html). For more information about getting the most out of your EBS volumes, see [Amazon EBS volume performance](ebs-performance.md).

For more information about pricing, see [Amazon EBS Pricing](https://aws.amazon.com/ebs/pricing/).

**Volume types**
+ [Solid state drive (SSD) volumes](#vol-type-ssd)
+ [Hard disk drive (HDD) volumes](#vol-type-hdd)
+ [Previous generation volumes](#vol-type-prev)

## Solid state drive (SSD) volumes


SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. SSD-backed volume types include **General Purpose SSD** and **Provisioned IOPS SSD**. The following is a summary of the use cases and characteristics of SSD-backed volumes.


|  | [General Purpose SSD](general-purpose.md) | [General Purpose SSD](general-purpose.md) | [Provisioned IOPS SSD](provisioned-iops.md) | [Provisioned IOPS SSD](provisioned-iops.md) |
| --- | --- | --- | --- | --- |
| Volume type | gp3 ⁶ | gp2 | io2 Block Express ⁵ | io1 |
| Durability | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 99.999% durability (0.001% annual failure rate) | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) |
| Use cases | [See the AWS documentation website for more details](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) | [See the AWS documentation website for more details](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) | [See the AWS documentation website for more details](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) | [See the AWS documentation website for more details](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) |
| Volume size | 1 GiB - 64 TiB | 1 GiB - 16 TiB | 4 GiB - 64 TiB | 4 GiB - 16 TiB |
| Max IOPS | 80,000 ³ (64 KiB I/O ⁴) | 16,000 (16 KiB I/O ⁴) | 256,000 ³ (16 KiB I/O ⁴) | 64,000 (16 KiB I/O ⁴) |
| Max throughput | 2,000 MiB/s | 250 MiB/s ¹ | 4,000 MiB/s | 1,000 MiB/s ² |
| Amazon EBS Multi-attach | Not supported | Not supported | Supported | Supported |
| NVMe reservations | Not supported | Not supported | Supported | Not supported |
| Boot volume | Supported | Supported | Supported | Supported |

¹ The throughput limit is between 128 MiB/s and 250 MiB/s, depending on the volume size. For more information, see [`gp2` volume performance](general-purpose.md#gp2-performance). Volumes created before **December 3, 2018** that have not been modified since creation might not reach full performance unless you [modify the volume](ebs-modify-volume.md).

² To achieve maximum throughput of 1,000 MiB/s, the volume must be provisioned with 64,000 IOPS and it must be attached to a [Nitro-based instance](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). Volumes created before **December 6, 2017** that have not been modified since creation might not reach full performance unless you [modify the volume](ebs-modify-volume.md).

³ [Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) support volumes provisioned with up to 256,000 IOPS. Other instance types can be attached to volumes provisioned with up to 64,000 IOPS, but can achieve up to 32,000 IOPS.

⁴ Represents the required I/O size to reach maximum IOPS within the volume's throughput limit.

⁵ `io2` Block Express volumes are designed to deliver an average latency of under 500 microseconds for 16 KiB I/O operations.

⁶ On Outposts, gp3 volumes support sizes up to 16 TiB, IOPS up to 16,000, and throughput up to 1,000 MiB/s.

For more information about the SSD-backed volume types, see the following:
+ [Amazon EBS General Purpose SSD volumes](general-purpose.md)
+ [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md)

## Hard disk drive (HDD) volumes


HDD-backed volumes are optimized for large streaming workloads where the dominant performance attribute is throughput. HDD volume types include **Throughput Optimized HDD** and **Cold HDD**. The following is a summary of the use cases and characteristics of HDD-backed volumes.


|  | [Throughput Optimized HDD volumes](hdd-vols.md#EBSVolumeTypes_st1) | [Cold HDD volumes](hdd-vols.md#EBSVolumeTypes_sc1) |
| --- | --- | --- |
| Volume type | st1 | sc1 |
| Durability | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) |
| Use cases | [See the AWS documentation website for more details](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) | [See the AWS documentation website for more details](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) |
| Volume size | 125 GiB - 16 TiB | 125 GiB - 16 TiB |
| Max IOPS per volume (1 MiB I/O) | 500 | 250 |
| Max throughput per volume | 500 MiB/s | 250 MiB/s |
| Amazon EBS Multi-attach | Not supported | Not supported |
| Boot volume | Not supported | Not supported |

For more information about HDD-backed volumes, see [Amazon EBS Throughput Optimized HDD and Cold HDD volumes](hdd-vols.md).

## Previous generation volumes


Magnetic (`standard`) volumes are previous generation volumes that are backed by magnetic drives. They are suited for workloads with small datasets where data is accessed infrequently and performance is not of primary importance. These volumes deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they can range in size from 1 GiB to 1 TiB.

**Tip**  
Magnetic is a previous generation volume type. If you need higher performance or performance consistency than previous-generation volumes can provide, we recommend using one of the current generation volume types.

The following table describes previous-generation EBS volume types.


|  | Magnetic | 
| --- | --- | 
| Volume type | standard | 
| Use cases | Workloads where data is infrequently accessed | 
| Volume size | 1 GiB-1 TiB | 
| Max IOPS per volume | 40–200 | 
| Max throughput per volume | 40–90 MiB/s | 
| Boot volume | Supported | 

# Amazon EBS General Purpose SSD volumes

General Purpose SSD (gp2 and gp3) volumes are backed by solid-state drives (SSDs). They balance price and performance for a wide variety of transactional workloads. These include virtual desktops, medium-sized single instance databases, latency sensitive interactive applications, development and test environments, and boot volumes. We recommend these volumes for most workloads.

Amazon EBS offers the following types of General Purpose SSD volumes:

**Topics**
+ [General Purpose SSD (gp3) volumes](#gp3-ebs-volume-type)
+ [General Purpose SSD (gp2) volumes](#EBSVolumeTypes_gp2)

## General Purpose SSD (gp3) volumes


General Purpose SSD (gp3) volumes are the latest generation of General Purpose SSD volumes, and the lowest cost SSD volume offered by Amazon EBS. This volume type helps to provide the right balance of price and performance for most applications. It also helps you to scale volume performance independently of volume size. This means that you can provision the required performance without needing to provision additional block storage capacity. Additionally, gp3 volumes offer a 20 percent lower price per GiB than General Purpose SSD (gp2) volumes.

gp3 volumes provide single-digit millisecond latency and 99.8 percent to 99.9 percent volume durability with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period. AWS designs gp3 volumes to deliver their provisioned performance 99 percent of the time.

**Tip**  
For latency-sensitive workloads, we recommend that you use io2 Block Express volumes. `io2` Block Express volumes are designed to deliver an average latency of under 500 microseconds for 16KiB I/O operations. `io2` Block Express volumes also deliver better outlier latency compared to General Purpose volumes, reducing the frequency of I/Os exceeding 800 microseconds by over 10 times. For more information, see [Provisioned IOPS SSD (`io2`) Block Express volumes](provisioned-iops.md#io2-block-express).

**Topics**
+ [gp3 volume performance](#gp3-performance)
+ [gp3 volume size](#gp3-sie)
+ [Migrate to gp3 from gp2](#migrate-to-gp3)

### gp3 volume performance


**Tip**  
gp3 volumes do not use burst performance. They can indefinitely sustain their full provisioned IOPS and throughput performance.

**IOPS performance**  
gp3 volumes deliver a consistent baseline IOPS performance of 3,000 IOPS, which is included with the price of storage. You can provision additional IOPS (up to a maximum of 80,000) for an additional cost at a ratio of 500 IOPS per GiB of volume size. Maximum IOPS can be provisioned for volumes 160 GiB or larger (500 IOPS per GiB × 160 GiB = 80,000 IOPS).

**Throughput performance**  
gp3 volumes deliver a consistent baseline throughput performance of 125 MiB/s, which is included with the price of storage. You can provision additional throughput (up to a maximum of 2,000 MiB/s) for an additional cost at a ratio of 0.25 MiB/s per provisioned IOPS. Maximum throughput can be provisioned at 8,000 IOPS or higher and 16 GiB or larger (8,000 IOPS × 0.25 MiB/s per IOPS = 2,000 MiB/s).
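To make the ratios above concrete, here is a small sketch (plain arithmetic derived from the limits described, not an AWS API, and not accounting for the lower Outposts caps) that computes the maximum IOPS and throughput you could provision for a gp3 volume:

```python
def gp3_max_iops(size_gib: int) -> int:
    """Max provisionable IOPS: 3,000 baseline, up to 500 IOPS/GiB, capped at 80,000."""
    return min(80_000, max(3_000, 500 * size_gib))

def gp3_max_throughput(provisioned_iops: int) -> float:
    """Max provisionable throughput: 125 MiB/s baseline, 0.25 MiB/s per IOPS, capped at 2,000."""
    return min(2_000.0, max(125.0, 0.25 * provisioned_iops))

# 160 GiB is the smallest size that supports the 80,000 IOPS maximum.
assert gp3_max_iops(160) == 80_000
# 8,000 IOPS is the smallest provisioned IOPS that supports 2,000 MiB/s.
assert gp3_max_throughput(8_000) == 2_000.0
```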

**Note**  
On Outposts, gp3 volumes support sizes up to 16 TiB, IOPS up to 16,000, and throughput up to 1,000 MiB/s.

### gp3 volume size


A gp3 volume can range in size from 1 GiB to 64 TiB.

### Migrate to gp3 from gp2


If you are currently using gp2 volumes, you can migrate them to gp3 by using [Elastic Volumes operations](ebs-modify-volume.md), which let you modify the volume type, IOPS, and throughput of your existing volumes without interrupting your Amazon EC2 instances. When you use the console to create a volume or to create an AMI from a snapshot, General Purpose SSD `gp3` is the default volume type. In other cases, `gp2` is the default selection, and you can select `gp3` as the volume type instead.

To find out how much you can save by migrating your gp2 volumes to gp3, use the [Amazon EBS gp2 to gp3 migration cost savings calculator](https://d1.awsstatic.com/product-marketing/Storage/EBS/gp2_gp3_CostOptimizer.dd5eac2187ef7678f4922fcc3d96982992964ba5.xlsx).

## General Purpose SSD (gp2) volumes


General Purpose SSD (`gp2`) volumes offer cost-effective storage that is ideal for a broad range of transactional workloads. With `gp2` volumes, performance scales with volume size.

**Tip**  
`gp3` volumes are the latest generation of General Purpose SSD volumes. They offer more predictable performance scaling and prices that are up to 20 percent lower than `gp2` volumes. For more information, see [General Purpose SSD (gp3) volumes](#gp3-ebs-volume-type).   
To find out how much you can save by migrating your `gp2` volumes to `gp3`, use the [Amazon EBS gp2 to gp3 migration cost savings calculator](https://d1.awsstatic.com/product-marketing/Storage/EBS/gp2_gp3_CostOptimizer.dd5eac2187ef7678f4922fcc3d96982992964ba5.xlsx).

`gp2` volumes provide single-digit millisecond latency and 99.8 percent to 99.9 percent volume durability with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period. AWS designs `gp2` volumes to deliver their provisioned performance 99 percent of the time.

**Topics**
+ [`gp2` volume performance](#gp2-performance)
+ [`gp2` volume size](#gp2-size)

### `gp2` volume performance


**IOPS performance**  
Baseline IOPS performance scales linearly between a minimum of 100 and a maximum of 16,000 at a rate of 3 IOPS per GiB of volume size. IOPS performance is provisioned as follows:
+ Volumes 33.33 GiB and smaller are provisioned with the minimum of 100 IOPS.
+ Volumes larger than 33.33 GiB are provisioned with 3 IOPS per GiB of volume size up to the maximum of 16,000 IOPS, which is reached at 5,334 GiB (3 × 5,334).
+ Volumes 5,334 GiB and larger are provisioned with 16,000 IOPS.

`gp2` volumes smaller than 1 TiB (and that are provisioned with less than 3,000 IOPS) can **burst** to 3,000 IOPS when needed for an extended period of time. A volume's ability to burst is governed by I/O credits. When I/O demand is greater than baseline performance, the volume **spends I/O credits** to burst to the required performance level (up to 3,000 IOPS). While bursting, I/O credits are not accumulated and they are spent at the rate of IOPS that is being used above baseline IOPS (spend rate = burst IOPS - baseline IOPS). The more I/O credits a volume has accrued, the longer it can sustain its burst performance. You can calculate **Burst duration** as follows:

```
                        (I/O credit balance)
Burst duration  =  ------------------------------
                   (Burst IOPS) - (Baseline IOPS)
```

When I/O demand drops to baseline performance level or lower, the volume starts to **earn I/O credits** at a rate of 3 I/O credits per GiB of volume size per second. Volumes have an **I/O credit accrual limit** of 5.4 million I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for at least 30 minutes.
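Under the stated assumptions (a 5.4 million credit limit, a spend rate of burst IOPS minus baseline IOPS, and an earn rate of 3 credits per GiB per second), the burst duration and refill time can be sketched as:

```python
CREDIT_LIMIT = 5_400_000  # I/O credit accrual limit per gp2 volume

def gp2_baseline_iops(size_gib: float) -> float:
    """Baseline IOPS: 3 IOPS/GiB, floor of 100, ceiling of 16,000."""
    return min(16_000, max(100, 3 * size_gib))

def burst_duration_s(size_gib: float, burst_iops: int = 3_000) -> float:
    """Seconds a full credit balance sustains burst_iops (baseline must be below burst)."""
    spend_rate = burst_iops - gp2_baseline_iops(size_gib)
    return CREDIT_LIMIT / spend_rate

def refill_time_s(size_gib: float) -> float:
    """Seconds to refill an empty credit balance at 3 credits/GiB/second."""
    return CREDIT_LIMIT / (3 * size_gib)

# A 100 GiB volume (300 baseline IOPS): 2,000 s of burst, 18,000 s to refill.
assert burst_duration_s(100) == 2_000
assert refill_time_s(100) == 18_000
```

These formulas reproduce the example rows in the table that follows.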

**Note**  
Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which provides a fast initial boot cycle for boot volumes and a good bootstrapping experience for other applications.

The following table lists example volume sizes and the associated baseline performance of the volume, the burst duration (when starting with 5.4 million I/O credits), and the time needed to refill an empty I/O credits balance.


| Volume size (GiB) | Baseline performance (IOPS) | Burst duration at 3,000 IOPS (seconds) | Time to refill empty credit balance (seconds) |
| --- | --- | --- | --- |
| 1 to 33.33 | 100 | 1,862 | 54,000 |
| 100 | 300 | 2,000 | 18,000 |
| 334 (min size for max throughput) | 1,002 | 2,703 | 5,389 |
| 750 | 2,250 | 7,200 | 2,400 |
| 1,000 | 3,000 | N/A ¹ | N/A ¹ |
| 5,334 (min size for max IOPS) and larger | 16,000 | N/A ¹ | N/A ¹ |

¹ The baseline performance of the volume exceeds the maximum burst performance.

You can monitor the I/O credit balance for a volume using the Amazon EBS `BurstBalance` metric in Amazon CloudWatch. This metric shows the percentage of I/O credits for `gp2` remaining. For more information, see [Amazon EBS I/O characteristics and monitoring](ebs-io-characteristics.md). You can set an alarm that notifies you when the `BurstBalance` value falls to a certain level. For more information, see [ Creating CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).

**Throughput performance**  


`gp2` volumes deliver throughput between 128 MiB/s and 250 MiB/s, depending on the volume size. Throughput performance is provisioned as follows:
+ Volumes that are 170 GiB and smaller deliver a maximum throughput of 128 MiB/s.
+ Volumes larger than 170 GiB but smaller than 334 GiB can burst to a maximum throughput of 250 MiB/s.
+ Volumes that are 334 GiB and larger deliver 250 MiB/s.

Throughput for a `gp2` volume can be calculated using the following formula, up to the throughput limit of 250 MiB/s:

```
Throughput in MiB/s = IOPS performance × I/O size in KiB / 1,024
```
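As a sketch of this formula in code (assuming 256 KiB as the largest I/O size counted toward gp2 throughput, and applying the 250 MiB/s ceiling; the function below is illustrative arithmetic, not an AWS API):

```python
def gp2_throughput_mibps(size_gib: float, io_size_kib: int = 256) -> float:
    """Throughput in MiB/s = IOPS × I/O size in KiB / 1,024, capped at 250 MiB/s.
    Uses baseline IOPS (3 IOPS/GiB, floor 100, ceiling 16,000)."""
    baseline_iops = min(16_000, max(100, 3 * size_gib))
    return min(250.0, baseline_iops * io_size_kib / 1024)

# A 334 GiB volume (1,002 baseline IOPS) reaches the 250 MiB/s ceiling at 256 KiB I/O.
assert gp2_throughput_mibps(334) == 250.0
```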

### `gp2` volume size


A `gp2` volume can range in size from 1 GiB to 16 TiB. Keep in mind that volume performance scales linearly with the volume size.

# Amazon EBS Provisioned IOPS SSD volumes

Provisioned IOPS SSD volumes are backed by solid-state drives (SSDs). They are the highest performance Amazon EBS storage volumes designed for critical, IOPS-intensive, and throughput-intensive workloads that require low latency. Provisioned IOPS SSD volumes deliver their provisioned IOPS performance 99.9 percent of the time.

**Topics**
+ [Provisioned IOPS SSD (`io2`) Block Express volumes](#io2-block-express)
+ [Provisioned IOPS SSD (`io1`) volumes](#EBSVolumeTypes_piops)

## Provisioned IOPS SSD (`io2`) Block Express volumes


`io2` Block Express volumes are built on the next generation of Amazon EBS storage server architecture, designed to meet the performance requirements of the most demanding I/O-intensive applications that run on [instances built on the Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). With the highest durability and lowest latency, Block Express is ideal for running performance-intensive, mission-critical workloads, such as Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics.

Block Express architecture increases performance and scale of `io2` volumes. Block Express servers communicate with [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) using the Scalable Reliable Datagram (SRD) networking protocol. This interface is implemented in the Nitro Card dedicated for Amazon EBS I/O function on the host hardware of the instance. It minimizes I/O delay and latency variation (network jitter), which provides faster and more consistent performance for your applications.

`io2` Block Express volumes are designed to provide 99.999 percent volume durability with an annual failure rate (AFR) no higher than 0.001 percent, which translates to a single volume failure per 100,000 running volumes over a one-year period. `io2` Block Express volumes are suited for workloads that benefit from a single volume that provides consistent sub-millisecond latency, and supports higher IOPS and throughput than gp3 volumes.

When attached to [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html), `io2` Block Express volumes are designed to deliver an average latency of under 500 microseconds for 16KiB I/O operations. `io2` Block Express volumes also deliver better outlier latency compared to General Purpose volumes, reducing the frequency of I/Os exceeding 800 microseconds by over 10 times.

**Topics**
+ [Considerations](#io2-bx-considerations)
+ [Performance](#io2-bx-perf)

### Considerations

+ `io2` Block Express volumes are available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions.
+ As of **April 30, 2025**, all new and previously created `io2` volumes are `io2` Block Express volumes.
+ [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) support volumes provisioned with up to 256,000 IOPS. Other instance types can be attached to volumes provisioned with up to 64,000 IOPS, but can achieve up to 32,000 IOPS.

### Performance


`io2` Block Express volumes have the following characteristics:
+ Average latency under 500 microseconds for 16KiB I/O size. Better outlier latency compared to General Purpose volumes, reducing the frequency of I/Os exceeding 800 microseconds by over 10 times.
+ Storage capacity up to 64 TiB (65,536 GiB).
+ Provisioned IOPS up to 256,000, with an IOPS:GiB ratio of 1,000:1. Maximum IOPS can be provisioned with volumes 256 GiB and larger (1,000 IOPS × 256 GiB = 256,000 IOPS).
**Note**  
You can achieve up to 256,000 IOPS with [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). On other instances, you can achieve up to 32,000 IOPS.
+ Volume throughput up to 4,000 MiB/s. Throughput scales proportionally at a rate of 0.256 MiB/s per provisioned IOPS. Maximum throughput can be achieved at 16,000 IOPS or higher.

![Throughput limits for io2 Block Express volumes](http://docs.aws.amazon.com/ebs/latest/userguide/images/io2_bx.png)
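The IOPS and throughput scaling rules above can be sketched as plain arithmetic (an illustration of the stated limits, not an AWS API):

```python
def io2bx_max_iops(size_gib: int) -> int:
    """Max provisionable IOPS for io2 Block Express: 1,000:1 IOPS:GiB, capped at 256,000."""
    return min(256_000, 1_000 * size_gib)

def io2bx_max_throughput_mibps(provisioned_iops: int) -> float:
    """Throughput scales at 0.256 MiB/s per provisioned IOPS, capped at 4,000 MiB/s."""
    return min(4_000.0, 0.256 * provisioned_iops)

# 256 GiB is the smallest size that supports the 256,000 IOPS maximum,
# and the 4,000 MiB/s throughput cap is reached at 16,000 IOPS.
assert io2bx_max_iops(256) == 256_000
assert io2bx_max_throughput_mibps(16_000) == 4_000.0
```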


## Provisioned IOPS SSD (`io1`) volumes


Provisioned IOPS SSD (`io1`) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.

`io1` volumes are designed to provide 99.8 percent to 99.9 percent volume durability with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period.

`io1` volumes are available for all Amazon EC2 instance types.

**Performance**  
`io1` volumes can range in size from 4 GiB to 16 TiB and you can provision from 100 IOPS up to 64,000 IOPS per volume. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB `io1` volume can be provisioned with up to 5,000 IOPS.

The maximum IOPS can be provisioned for volumes that are 1,280 GiB or larger (50 × 1,280 GiB = 64,000 IOPS).
+ `io1` volumes provisioned with up to 32,000 IOPS support a maximum I/O size of 256 KiB and yield as much as 500 MiB/s of throughput. With the I/O size at the maximum, peak throughput is reached at 2,000 IOPS.
+ `io1` volumes provisioned with more than 32,000 IOPS (up to the maximum of 64,000 IOPS) yield a linear increase in throughput at a rate of 16 KiB per provisioned IOPS. For example, a volume provisioned with 48,000 IOPS can support up to 750 MiB/s of throughput (16 KiB per provisioned IOPS × 48,000 provisioned IOPS = 750 MiB/s).
+ To achieve the maximum throughput of 1,000 MiB/s, a volume must be provisioned with 64,000 IOPS (16 KiB per provisioned IOPS × 64,000 provisioned IOPS = 1,000 MiB/s).
+ You can achieve up to 64,000 IOPS only on [Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). On other instances, you can achieve up to 32,000 IOPS.

The following graph illustrates these performance characteristics:

![\[Throughput limits for io1 volumes\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/io1_throughput.png)
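The `io1` sizing and throughput rules above can be sketched the same way (helper names are illustrative):

```python
def io1_max_iops(size_gib: int) -> int:
    """Max provisionable IOPS for an io1 volume: 50 IOPS per GiB,
    capped at 64,000 IOPS (reached at 1,280 GiB)."""
    return min(64_000, 50 * size_gib)

def io1_max_throughput(provisioned_iops: int) -> float:
    """Max throughput (MiB/s) for an io1 volume.

    Up to 32,000 IOPS, the volume supports 256 KiB I/Os, so the
    500 MiB/s ceiling is reached at 2,000 IOPS. Above 32,000 IOPS,
    throughput scales at 16 KiB per provisioned IOPS, reaching
    1,000 MiB/s at 64,000 IOPS.
    """
    if provisioned_iops <= 32_000:
        return min(500.0, provisioned_iops * 256 / 1024)
    return provisioned_iops * 16 / 1024

print(io1_max_iops(100))           # 5000
print(io1_max_throughput(48_000))  # 750.0
```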


Your per-I/O latency experience depends on the provisioned IOPS and on your workload profile. For the best I/O latency experience, ensure that you provision IOPS to meet the I/O profile of your workload.

# Amazon EBS Throughput Optimized HDD and Cold HDD volumes
Throughput Optimized HDD and Cold HDD volumes

The HDD-backed volumes provided by Amazon EBS fall into these categories:
+ Throughput Optimized HDD — A low-cost HDD designed for frequently accessed, throughput-intensive workloads.
+ Cold HDD — The lowest-cost HDD designed for less frequently accessed workloads.

**Topics**
+ [Limitations on per-instance throughput](#throughput-limitations)
+ [Throughput Optimized HDD volumes](#EBSVolumeTypes_st1)
+ [Cold HDD volumes](#EBSVolumeTypes_sc1)
+ [Performance considerations when using HDD volumes](#EBSVolumeTypes_considerations)
+ [Monitor the burst bucket balance for volumes](#monitoring_burstbucket-hdd)

## Limitations on per-instance throughput


Throughput for `st1` and `sc1` volumes is always determined by the smaller of the following:
+ Throughput limits of the volume
+ Throughput limits of the instance

As with all Amazon EBS volumes, we recommend that you select an appropriate EBS-optimized EC2 instance to avoid network bottlenecks.

## Throughput Optimized HDD volumes


Throughput Optimized HDD (`st1`) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable `st1` volumes are not supported. 

Throughput Optimized HDD (`st1`) volumes, though similar to Cold HDD (`sc1`) volumes, are designed to support *frequently* accessed data.

**Note**  
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use [Amazon EBS General Purpose SSD volumes](general-purpose.md) or [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md). For more information, see [Inefficiency of small read/writes on HDD](#inefficiency).

Throughput Optimized HDD (`st1`) volumes attached to EBS-optimized instances are designed to offer consistent performance, delivering at least 90 percent of the expected throughput performance 99 percent of the time in a given year.

### Throughput credits and burst performance


Like `gp2`, `st1` uses a burst bucket model for performance. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available. Larger volumes have higher baseline and burst throughput. The more credits your volume has, the longer it can drive I/O at the burst level.

The following diagram shows the burst bucket behavior for `st1`.

![\[st1 burst bucket\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/st1-burst-bucket.png)


Subject to throughput and throughput-credit caps, the available throughput of an `st1` volume is expressed by the following formula:

```
(Volume size) × (Credit accumulation rate per TiB) = Throughput
```

For a 1-TiB `st1` volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/s, and it can hold up to 1 TiB-worth of credits.

Larger volumes scale these limits linearly, with throughput capped at a maximum of 500 MiB/s. After the bucket is depleted, throughput is limited to the baseline rate of 40 MiB/s per TiB. 

On volume sizes ranging from 0.125 TiB to 16 TiB, baseline throughput varies from 5 MiB/s to a cap of 500 MiB/s, which is reached at 12.5 TiB as follows:

```
            40 MiB/s
12.5 TiB × ---------- = 500 MiB/s
             1 TiB
```

Burst throughput varies from 31 MiB/s to a cap of 500 MiB/s, which is reached at 2 TiB as follows:

```
         250 MiB/s
2 TiB × ---------- = 500 MiB/s
          1 TiB
```

The following table states the full range of base and burst throughput values for `st1`.


| Volume size (TiB) | ST1 base throughput (MiB/s) | ST1 burst throughput (MiB/s) | 
| --- | --- | --- | 
| 0.125 | 5 | 31 | 
| 0.5 | 20 | 125 | 
| 1 | 40 | 250 | 
| 2 | 80 | 500 | 
| 3 | 120 | 500 | 
| 4 | 160 | 500 | 
| 5 | 200 | 500 | 
| 6 | 240 | 500 | 
| 7 | 280 | 500 | 
| 8 | 320 | 500 | 
| 9 | 360 | 500 | 
| 10 | 400 | 500 | 
| 11 | 440 | 500 | 
| 12 | 480 | 500 | 
| 12.5 | 500 | 500 | 
| 13 | 500 | 500 | 
| 14 | 500 | 500 | 
| 15 | 500 | 500 | 
| 16 | 500 | 500 | 

The following diagram plots the table values:

![\[Comparing st1 base and burst throughput\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/st1_base_v_burst.png)
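The table values follow from the per-TiB rates described above; a minimal sketch of the model (for 0.125 TiB it returns a burst of 31.25 MiB/s, which the table rounds to 31):

```python
def st1_throughput(size_tib: float) -> tuple[float, float]:
    """(baseline, burst) throughput in MiB/s for an st1 volume:
    baseline accrues at 40 MiB/s per TiB and burst at 250 MiB/s
    per TiB, both capped at 500 MiB/s."""
    return min(500.0, 40.0 * size_tib), min(500.0, 250.0 * size_tib)

print(st1_throughput(1))     # (40.0, 250.0)
print(st1_throughput(12.5))  # (500.0, 500.0)
```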


**Note**  
When you create a snapshot of a Throughput Optimized HDD (`st1`) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress.

For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitor the burst bucket balance for volumes](#monitoring_burstbucket-hdd).

## Cold HDD volumes


Cold HDD (`sc1`) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than `st1`, `sc1` is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, `sc1` provides inexpensive block storage. Bootable `sc1` volumes are not supported.

Cold HDD (`sc1`) volumes, though similar to Throughput Optimized HDD (`st1`) volumes, are designed to support *infrequently* accessed data.

**Note**  
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use [Amazon EBS General Purpose SSD volumes](general-purpose.md) or [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md). For more information, see [Inefficiency of small read/writes on HDD](#inefficiency).

Cold HDD (`sc1`) volumes attached to EBS-optimized instances are designed to offer consistent performance, delivering at least 90 percent of the expected throughput performance 99 percent of the time in a given year.

### Throughput credits and burst performance


Like `gp2`, `sc1` uses a burst bucket model for performance. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available. Larger volumes have higher baseline and burst throughput. The more credits your volume has, the longer it can drive I/O at the burst level.

![\[sc1 burst bucket\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/sc1-burst-bucket.png)


Subject to throughput and throughput-credit caps, the available throughput of an `sc1` volume is expressed by the following formula:

```
(Volume size) × (Credit accumulation rate per TiB) = Throughput
```

For a 1-TiB `sc1` volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s, and it can hold up to 1 TiB-worth of credits.

Larger volumes scale these limits linearly, with throughput capped at a maximum of 250 MiB/s. After the bucket is depleted, throughput is limited to the baseline rate of 12 MiB/s per TiB. 
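Because the bucket holds up to 1 TiB worth of credits per TiB of volume, you can estimate how long a full bucket sustains the burst rate; a sketch under those assumptions (the helper name is illustrative):

```python
MIB_PER_TIB = 1024 * 1024

def sc1_full_burst_seconds(size_tib: float) -> float:
    """Seconds an sc1 volume can sustain burst from a full bucket.

    The bucket holds size_tib TiB of credits and drains at the burst
    rate (80 MiB/s per TiB, capped at 250 MiB/s) while refilling at
    the baseline rate (12 MiB/s per TiB).
    """
    burst = min(250.0, 80.0 * size_tib)
    baseline = min(250.0, 12.0 * size_tib)
    return size_tib * MIB_PER_TIB / (burst - baseline)

# A 1-TiB volume bursts at 80 MiB/s while refilling at 12 MiB/s,
# so a full bucket lasts roughly 4.3 hours.
print(sc1_full_burst_seconds(1) / 3600)
```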

On volume sizes ranging from 0.125 TiB to 16 TiB, baseline throughput varies from 1.5 MiB/s to a maximum of 192 MiB/s, which is reached at 16 TiB as follows:

```
           12 MiB/s
16 TiB × ---------- = 192 MiB/s
            1 TiB
```

Burst throughput varies from 10 MiB/s to a cap of 250 MiB/s, which is reached at 3.125 TiB as follows:

```
             80 MiB/s
3.125 TiB × ----------- = 250 MiB/s
              1 TiB
```

The following table states the full range of base and burst throughput values for `sc1`:


| Volume size (TiB) | SC1 base throughput (MiB/s) | SC1 burst throughput (MiB/s) | 
| --- | --- | --- | 
| 0.125 | 1.5 | 10 | 
| 0.5 | 6 | 40 | 
| 1 | 12 | 80 | 
| 2 | 24 | 160 | 
| 3 | 36 | 240 | 
| 3.125 | 37.5 | 250 | 
| 4 | 48 | 250 | 
| 5 | 60 | 250 | 
| 6 | 72 | 250 | 
| 7 | 84 | 250 | 
| 8 | 96 | 250 | 
| 9 | 108 | 250 | 
| 10 | 120 | 250 | 
| 11 | 132 | 250 | 
| 12 | 144 | 250 | 
| 13 | 156 | 250 | 
| 14 | 168 | 250 | 
| 15 | 180 | 250 | 
| 16 | 192 | 250 | 

The following diagram plots the table values:

![\[Comparing sc1 base and burst throughput\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/sc1_base_v_burst.png)


**Note**  
When you create a snapshot of a Cold HDD (`sc1`) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress.

For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitor the burst bucket balance for volumes](#monitoring_burstbucket-hdd).

## Performance considerations when using HDD volumes


For optimal throughput results using HDD volumes, plan your workloads with the following considerations in mind.

### **Comparing Throughput Optimized HDD and Cold HDD**


The `st1` and `sc1` bucket sizes vary according to volume size, and a full bucket contains enough tokens for a full volume scan. However, larger `st1` and `sc1` volumes take longer for the volume scan to complete because of per-instance and per-volume throughput limits. Volumes attached to smaller instances are limited to the per-instance throughput rather than the `st1` or `sc1` throughput limits.

Both `st1` and `sc1` are designed for performance consistency of 90 percent of burst throughput 99 percent of the time. Non-compliant periods are approximately uniformly distributed, targeting 99 percent of expected total throughput each hour.

In general, scan times are expressed by this formula:

```
 Volume size
------------ = Scan time
 Throughput
```

For example, taking the performance consistency guarantees and other optimizations into account, an `st1` customer with a 5-TiB volume can expect to complete a full volume scan in 2.91 to 3.27 hours. 
+ Optimal scan time

  ```
     5 TiB            5 TiB
  ----------- = ------------------ = 10,486 seconds = 2.91 hours 
   500 MiB/s     0.00047684 TiB/s
  ```
+ Maximum scan time

  ```
    2.91 hours
  -------------- = 3.27 hours
   (0.90)(0.99) <-- From expected performance of 90% of burst 99% of the time
  ```

Similarly, an `sc1` customer with a 5-TiB volume can expect to complete a full volume scan in 5.83 to 6.54 hours.
+ Optimal scan time

  ```
     5 TiB             5 TiB
  ----------- = ------------------- = 20,972 seconds = 5.83 hours 
   250 MiB/s     0.000238418 TiB/s
  ```
+ Maximum scan time

  ```
    5.83 hours
  -------------- = 6.54 hours
   (0.90)(0.99)
  ```

The following table shows ideal scan times for volumes of various size, assuming full buckets and sufficient instance throughput.


| Volume size (TiB) | ST1 scan time with burst (hours)\* | SC1 scan time with burst (hours)\* | 
| --- | --- | --- | 
| 1 | 1.17 | 3.64 | 
| 2 | 1.17 | 3.64 | 
| 3 | 1.75 | 3.64 | 
| 4 | 2.33 | 4.66 | 
| 5 | 2.91 | 5.83 | 
| 6 | 3.50 | 6.99 | 
| 7 | 4.08 | 8.16 | 
| 8 | 4.66 | 9.32 | 
| 9 | 5.24 | 10.49 | 
| 10 | 5.83 | 11.65 | 
| 11 | 6.41 | 12.82 | 
| 12 | 6.99 | 13.98 | 
| 13 | 7.57 | 15.15 | 
| 14 | 8.16 | 16.31 | 
| 15 | 8.74 | 17.48 | 
| 16 | 9.32 | 18.64 | 

\* These scan times assume an average queue depth (rounded to the nearest whole number) of four or more when performing 1 MiB of sequential I/O.

Therefore, if you have a throughput-oriented workload that needs to complete scans quickly (up to 500 MiB/s), or requires several full volume scans a day, use `st1`. If you are optimizing for cost, your data is relatively infrequently accessed, and you don't need more than 250 MiB/s of scanning performance, use `sc1`.
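The scan times in the table can be reproduced from the formula above; a sketch assuming a full burst bucket and no instance-level bottleneck:

```python
MIB_PER_TIB = 1024 * 1024

def scan_hours(size_tib: float, per_tib_burst: float, cap: float) -> float:
    """Ideal full-volume scan time (hours) at burst throughput."""
    throughput = min(cap, per_tib_burst * size_tib)  # MiB/s
    return size_tib * MIB_PER_TIB / throughput / 3600

def st1_scan_hours(size_tib): return scan_hours(size_tib, 250.0, 500.0)
def sc1_scan_hours(size_tib): return scan_hours(size_tib, 80.0, 250.0)

print(round(st1_scan_hours(5), 2))  # 2.91
print(round(sc1_scan_hours(1), 2))  # 3.64
```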

### Inefficiency of small read/writes on HDD


The performance model for `st1` and `sc1` volumes is optimized for sequential I/Os, favoring high-throughput workloads, offering acceptable performance on workloads with mixed IOPS and throughput, and discouraging workloads with small, random I/O.

For example, an I/O request of 1 MiB or less counts as a full 1 MiB I/O credit, which makes small, random I/O inefficient. Sequential I/Os, however, are merged into 1 MiB I/O blocks that together count as only a single 1 MiB I/O credit. 

## Monitor the burst bucket balance for volumes


You can monitor the burst bucket level for `st1` and `sc1` volumes using the Amazon EBS `BurstBalance` metric available in Amazon CloudWatch. This metric shows the throughput credits for `st1` and `sc1` remaining in the burst bucket. For more information about the `BurstBalance` metric and other metrics related to I/O, see [Amazon EBS I/O characteristics and monitoring](ebs-io-characteristics.md). CloudWatch also allows you to set an alarm that notifies you when the `BurstBalance` value falls to a certain level. For more information, see [Creating CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).

# Amazon EBS volume constraints
EBS volume constraints

The size of an Amazon EBS volume is constrained by the physics and arithmetic of block data storage, as well as by the implementation decisions of operating system (OS) and file system designers. AWS imposes additional limits on volume size to safeguard the reliability of its services.

The following sections describe the most important factors that limit the usable size of an EBS volume and offer recommendations for configuring your EBS volumes.

**Topics**
+ [Storage capacity](#ebs-storage-capacity)
+ [Service limitations](#aws_limits)
+ [Partitioning schemes](#partitioning)
+ [Data block sizes](#block_size)

## Storage capacity


The following table summarizes the theoretical and implemented storage capacities for the most commonly used file systems on Amazon EBS, assuming a 4,096 byte block size.


| Partitioning scheme | Max addressable blocks | Theoretical max size (blocks × block size) | Ext4 implemented max size\* | XFS implemented max size\*\* | NTFS implemented max size | Max supported by EBS | 
| --- | --- | --- | --- | --- | --- | --- | 
| MBR | 2³² | 2 TiB | 2 TiB | 2 TiB | 2 TiB | 2 TiB | 
| GPT | 2⁶⁴ | 64 ZiB | 1 EiB = 1024² TiB (50 TiB certified on RHEL7) | 500 TiB (certified on RHEL7) | 256 TiB | 64 TiB † | 

\* [Ext4 Howto](https://archive.kernel.org/oldwiki/ext4.wiki.kernel.org/index.php/Ext4_Howto.html) and [What are the file and system size limits for Red Hat Enterprise Linux?](https://access.redhat.com/solutions/1532)

\*\* [What are the file and system size limits for Red Hat Enterprise Linux?](https://access.redhat.com/solutions/1532)

† `io2` Block Express volumes support up to 64 TiB for GPT partitions. For more information, see [Provisioned IOPS SSD (`io2`) Block Express volumes](provisioned-iops.md#io2-block-express).

## Service limitations


Amazon EBS abstracts the massively distributed storage of a data center into virtual hard disk drives. To an operating system installed on an EC2 instance, an attached EBS volume appears to be a physical hard disk drive containing 512-byte disk sectors. The OS manages the allocation of data blocks (or clusters) onto those virtual sectors through its storage management utilities. The allocation is in conformity with a volume partitioning scheme, such as master boot record (MBR) or GUID partition table (GPT), and within the capabilities of the installed file system (ext4, NTFS, and so on). 

EBS is not aware of the data contained in its virtual disk sectors; it only ensures the integrity of the sectors. This means that AWS actions and OS actions are independent of each other. When you are selecting a volume size, be aware of the capabilities and limits of both, as in the following cases: 
+ EBS currently supports a maximum volume size of 64 TiB. This means that you can create an EBS volume as large as 64 TiB, but whether the OS recognizes all of that capacity depends on its own design characteristics and on how the volume is partitioned.
+ Boot volumes must use either the MBR or GPT partitioning scheme. The AMI you launch an instance from determines the boot mode and subsequently the partition scheme used for the boot volume.

  With **MBR**, boot volumes are limited to 2 TiB in size.

  With **GPT**, boot volumes can be up to 64 TiB in size when used with GRUB2 (Linux) or UEFI boot mode (Windows).

  For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md).
+ Non-boot volumes that are 2 TiB (2048 GiB) or larger must use a GPT partition table to access the entire volume. 

## Partitioning schemes


Among other impacts, the partitioning scheme determines how many logical data blocks can be uniquely addressed in a single volume. For more information, see [Data block sizes](#block_size). The common partitioning schemes in use are *Master Boot Record* (MBR) and *GUID partition table* (GPT). The important differences between these schemes can be summarized as follows.

### MBR


MBR uses a 32-bit data structure to store block addresses. This means that each data block is mapped with one of 2³² possible integers. The maximum addressable size of a volume is given by the following formula:

```
2³² × Block size
```

The block size for MBR volumes is conventionally limited to 512 bytes. Therefore:

```
2³² × 512 bytes = 2 TiB
```

Engineering workarounds to increase this 2-TiB limit for MBR volumes have not met with widespread industry adoption. Consequently, Linux and Windows never detect an MBR volume as being larger than 2 TiB even if AWS shows its size to be larger. 

### GPT


GPT uses a 64-bit data structure to store block addresses. This means that each data block is mapped with one of 2⁶⁴ possible integers. The maximum addressable size of a volume is given by the following formula:

```
2⁶⁴ × Block size
```

The block size for GPT volumes is commonly 4,096 bytes. Therefore:

```
2⁶⁴ × 4,096 bytes
   = 2⁶⁴ × 2¹² bytes
   = 2⁷⁰ × 2⁶ bytes
   = 64 ZiB
```

Real-world computer systems don't support anything close to this theoretical maximum. Implemented file-system size is currently limited to 50 TiB for ext4 and 256 TiB for NTFS.
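A quick check of the arithmetic above:

```python
TIB = 2**40
ZIB = 2**70

# MBR: 2^32 block addresses × 512-byte blocks = 2 TiB
assert 2**32 * 512 == 2 * TIB

# GPT: 2^64 block addresses × 4,096-byte blocks = 64 ZiB
assert 2**64 * 4096 == 64 * ZIB

print("MBR and GPT theoretical maximums confirmed")
```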

## Data block sizes


Data storage on a modern hard drive is managed through *logical block addressing*, an abstraction layer that allows the operating system to read and write data in logical blocks without knowing much about the underlying hardware. The operating system relies on the storage device to map the blocks to its physical sectors, and reads and writes data to disk using data blocks that are a multiple of the sector size.

Amazon EBS advertises either 512-byte or 4,096-byte (4 KiB) physical sectors to the operating system, depending on the following factors:

1. The Amazon EC2 instance type

1. The operating system

1. The NVMe driver version

Amazon EBS advertises 4-KiB physical sectors only if all three factors support it. If any one of them does not support 4-KiB physical sectors, Amazon EBS advertises 512-byte physical sectors.

**Amazon EC2 instance type support**  
The following table shows the sector sizes that Amazon EBS advertises for the different Amazon EC2 instance types.


| Instance type | Linux | Windows | 
| --- | --- | --- | 
| All Xen-based instance types | Amazon EBS always advertises 512-byte physical sectors | Amazon EBS always advertises 512-byte physical sectors | 
| A1, C5, C5a, C5ad, C5d, C5n, C6g, C6gd, DL1, D3, D3en, G4ad, G4dn, G5, G5g, I3, I3en, Inf1, M5, M5a, M5ad, M5d, M5dn, M5n, M5zn, M6g, M6gd, P3dn, P4d, P4de, R5, R5a, R5ad, R5d, R5dn, R5n, R6g, R6gd, T3, T3a, T4g, U-12tb1, U-18tb1, U-24tb1, U-3tb1, U-6tb1, U-9tb1, X2gd, X2iezn, VT1, Z1d | Amazon EBS always advertises 512-byte physical sectors | Amazon EBS advertises 512-byte or 4-KiB physical sectors¹ | 
| All other Nitro-based instances | Amazon EBS advertises 512-byte or 4-KiB physical sectors¹ | Amazon EBS advertises 512-byte or 4-KiB physical sectors¹ | 

¹ Depends on operating system support. See the following section.

**Operating system support**  
The following table provides example operating systems and the corresponding physical sector sizes advertised by Amazon EBS. This is **not an exhaustive list**. We recommend that you verify the physical sector size advertised by Amazon EBS in your operating system.




| Operating system | Advertised physical sector size | 
| --- | --- | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/volume_constraints.html)  | 512 byte | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/volume_constraints.html)  | 4 KiB | 

¹ For Windows workloads, ensure that you are using the latest version of the [AWS NVMe drivers](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/aws-nvme-drivers.html). Amazon EBS advertises 4-KiB physical sectors with AWS NVMe driver version 1.4.1 and later.

### Non-default block sizes


The industry default size for logical data blocks is currently 4 KiB. Because certain workloads benefit from a smaller or larger block size, file systems support non-default block sizes that can be specified during formatting. Scenarios in which non-default block sizes should be used (such as optimizations) are outside the scope of this topic, but the choice of block size has consequences for the storage capacity of the volume. The following table shows theoretical storage capacity as a function of block size. However, note that the EBS-imposed limit on volume size (64 TiB for io2 Block Express) is currently equal to the maximum size enabled by 16-KiB data blocks.


| Block size | Max volume size | 
| --- | --- | 
| 4 KiB (default) | 16 TiB | 
| 8 KiB | 32 TiB | 
| 16 KiB | 64 TiB | 
| 32 KiB | 128 TiB | 
| 64 KiB (maximum) | 256 TiB | 
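The table's pattern, in which the maximum volume size doubles as the block size doubles, reflects a fixed count of 2³² addressable blocks; a sketch under that assumption (the function name is illustrative):

```python
TIB = 2**40

def max_volume_tib(block_size_bytes: int) -> int:
    """Theoretical max volume size (TiB) for a file system that
    addresses 2**32 logical blocks of the given block size."""
    return 2**32 * block_size_bytes // TIB

for block in (4096, 8192, 16384, 32768, 65536):
    print(f"{block // 1024} KiB -> {max_volume_tib(block)} TiB")
```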

# Amazon EBS volumes and NVMe
EBS volumes and NVMe

Amazon EBS volumes are exposed as NVMe block devices on Amazon EC2 instances built on the [AWS Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). To fully utilize the performance and capabilities of Amazon EBS volumes exposed as NVMe block devices, the EC2 instance must have the AWS NVMe driver installed. All current generation AWS Windows and Linux AMIs come with the AWS NVMe driver installed by default.

If you use an AMI that does not have the AWS NVMe driver, you can manually install it. For more information, see [AWS NVMe drivers](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/aws-nvme-drivers.html) in the *Amazon EC2 User Guide*.

**Linux instances**  
The device names are `/dev/nvme0n1`, `/dev/nvme1n1`, and so on. The device names that you specify in a block device mapping are renamed using NVMe device names (`/dev/nvme[0-26]n1`). The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping.

**Windows instances**  
When you attach a volume to your instance, you include a device name for the volume. This device name is used by Amazon EC2. The block device driver for the instance assigns the actual volume name when mounting the volume, and the name assigned can be different than the name that Amazon EC2 uses.

**Topics**
+ [Map Amazon EBS volumes to NVMe device names](identify-nvme-ebs-device.md)
+ [NVMe I/O operation timeout for Amazon EBS volumes](timeout-nvme-ebs-volumes.md)
+ [NVMe Abort command for Amazon EBS volumes](abort-command.md)

# Map Amazon EBS volumes to NVMe device names
Map volumes to device names

EBS uses single-root I/O virtualization (SR-IOV) to provide volume attachments on Nitro-based instances using the NVMe specification. These devices rely on standard NVMe drivers on the operating system. These drivers typically discover attached devices during instance boot, and create device nodes based on the order in which the devices respond, not on how the devices are specified in the block device mapping.

## Linux instances


In Linux, NVMe device names follow the pattern `/dev/nvme<x>n<y>`, where `<x>` is the enumeration order and, for EBS, `<y>` is 1. Occasionally, devices can respond to discovery in a different order in subsequent instance starts, which causes the device name to change. Additionally, the device name assigned by the block device driver can differ from the name specified in the block device mapping.

We recommend that you use stable identifiers for your EBS volumes within your instance, such as one of the following:
+ For Nitro-based instances, the block device mappings that are specified in the Amazon EC2 console when you are attaching an EBS volume or during `AttachVolume` or `RunInstances` API calls are captured in the vendor-specific data field of the NVMe controller identification. With Amazon Linux AMIs later than version 2017.09.01, we provide a `udev` rule that reads this data and creates a symbolic link to the block-device mapping.
+ The EBS volume ID and the mount point are stable between instance state changes. The NVMe device name can change depending on the order in which the devices respond during instance boot. We recommend using the EBS volume ID and the mount point for consistent device identification.
+ NVMe EBS volumes have the EBS volume ID set as the serial number in the device identification. Use the `lsblk -o +SERIAL` command to list the serial number.
+ The NVMe device name format can vary depending on whether the EBS volume was attached during or after the instance launch. NVMe device names for volumes attached after instance launch include the `/dev/` prefix, while NVMe device names for volumes attached during instance launch do not include the `/dev/` prefix.
  + For Amazon Linux or FreeBSD AMIs, use the `sudo ebsnvme-id /dev/nvme0n1 -u` command for a consistent NVMe device name. 
  + For other distributions, use the `sudo nvme id-ctrl -V /dev/nvme0n1` command to determine the NVMe device name. You might need to include the `--vendor-specific` command option.
+ When a device is formatted, a UUID is generated that persists for the life of the filesystem. A device label can be specified at the same time. For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md) and [ Boot from the wrong volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-booting-from-wrong-volume.html).
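For example, mounting by file-system UUID keeps the mount stable even if the NVMe device name changes across reboots. A sketch of an `/etc/fstab` entry follows; the UUID, mount point, and file-system type are hypothetical:

```
# /etc/fstab -- mount by UUID rather than by NVMe device name
UUID=aebf131c-6957-451e-8d34-ec978d9581ae  /data  xfs  defaults,nofail  0  2
```

You can find the UUID for a formatted volume with the `blkid` or `lsblk -o +UUID` commands.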

**Amazon Linux AMIs**  
With Amazon Linux AMI 2017.09.01 or later (including Amazon Linux 2), you can run the **ebsnvme-id** command as follows to map the NVMe device name to a volume ID and device name:

The following example shows the command and output for a volume attached during instance launch. Note that the NVMe device name does not include the `/dev/` prefix.

```
[ec2-user ~]$ sudo /sbin/ebsnvme-id /dev/nvme0n1
Volume ID: vol-01324f611e2463981
sda
```

The following example shows the command and output for a volume attached after instance launch. Note that the NVMe device name includes the `/dev/` prefix.

```
[ec2-user ~]$ sudo /sbin/ebsnvme-id /dev/nvme1n1
Volume ID: vol-064784f1011136656
/dev/sdf
```

Amazon Linux also creates a symbolic link from the device name in the block device mapping (for example, `/dev/sdf`), to the NVMe device name.

**FreeBSD AMIs**  
Starting with FreeBSD 12.2-RELEASE, you can run the **ebsnvme-id** command as shown above. Pass either the name of the NVMe device (for example, `nvme0`) or the disk device (for example, `nvd0` or `nda0`). FreeBSD also creates symbolic links to the disk devices (for example, `/dev/aws/disk/ebs/`*volume\_id*).

**Other Linux AMIs**  
With a kernel version of 4.2 or later, you can run the **nvme id-ctrl** command as follows to map an NVMe device to a volume ID. First, install the NVMe command line package, `nvme-cli`, using the package management tools for your Linux distribution. For download and installation instructions for other distributions, refer to the documentation specific to your distribution.

The following example gets the volume ID and NVMe device name for a volume that was attached during instance launch. Note that the NVMe device name does not include the `/dev/` prefix. The device name is available through the NVMe controller vendor-specific extension (bytes 384:4095 of the controller identification):

```
[ec2-user ~]$ sudo nvme id-ctrl -V /dev/nvme0n1
NVME Identify Controller:
vid     : 0x1d0f
ssvid   : 0x1d0f
sn      : vol01234567890abcdef
mn      : Amazon Elastic Block Store
...
0000: 73 64 61 20 20 20 20 20 20 20 20 20 20 20 20 20 "sda..."
```

The following example gets the volume ID and NVMe device name for a volume that was attached after instance launch. Note that the NVMe device name includes the `/dev/` prefix.

```
[ec2-user ~]$ sudo nvme id-ctrl -V /dev/nvme1n1
NVME Identify Controller:
vid     : 0x1d0f
ssvid   : 0x1d0f
sn      : volabcdef01234567890
mn      : Amazon Elastic Block Store
...
0000: 2f 64 65 76 2f 73 64 66 20 20 20 20 20 20 20 20 "/dev/sdf..."
```

The **lsblk** command lists available devices and their mount points (if applicable). This helps you determine the correct device name to use. In this example, `/dev/nvme0n1p1` is mounted as the root device and `/dev/nvme1n1` is attached but not mounted.

```
[ec2-user ~]$ lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:3   0  100G  0 disk
nvme0n1       259:0   0    8G  0 disk
  nvme0n1p1   259:1   0    8G  0 part /
  nvme0n1p128 259:2   0    1M  0 part
```
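In the examples above, the `sn` field is the EBS volume ID with its hyphen removed (`vol01234567890abcdef` corresponds to `vol-01234567890abcdef`). A small helper (the function name is ours) restores the canonical form; **lsblk** can also report the serial directly with `lsblk -o NAME,SERIAL`, avoiding `nvme-cli` entirely:

```shell
# Restore the canonical volume ID from the serial number reported by the
# NVMe controller by re-inserting the hyphen after the "vol" prefix.
serial_to_volume_id() {
    printf '%s\n' "$1" | sed 's/^vol/vol-/'
}

# The serial is also available without nvme-cli:
#   lsblk -o NAME,SERIAL
serial_to_volume_id vol01234567890abcdef   # → vol-01234567890abcdef
```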

## Windows instances


You can run the **ebsnvme-id** command to map the NVMe device disk number to an EBS volume ID and device name. By default, all EBS NVMe devices are enumerated. You can pass a disk number to enumerate information for a specific device. The `ebsnvme-id` tool is included in the latest AWS-provided Windows Server AMIs, in `C:\ProgramData\Amazon\Tools`.

Starting with AWS NVMe driver package `1.5.0`, the latest version of the `ebsnvme-id` tool is installed by the driver package. The latest version is only available in the driver package, and the standalone download link for the `ebsnvme-id` tool no longer receives updates. The last version available through the standalone link is `1.1.0`. To use it, download [ebsnvme-id.zip](https://s3.amazonaws.com/ec2-windows-drivers-downloads/EBSNVMeID/Latest/ebsnvme-id.zip) and extract the contents to your Amazon EC2 instance to get access to `ebsnvme-id.exe`.

```
PS C:\ProgramData\Amazon\Tools> ebsnvme-id.exe
Disk Number: 0
Volume ID: vol-0d6d7ee9f6e471a7f
Device Name: sda1

Disk Number: 1
Volume ID: vol-03a26248ff39b57cf
Device Name: xvdd

Disk Number: 2
Volume ID: vol-038bd1c629aa125e6
Device Name: xvde

Disk Number: 3
Volume ID: vol-034f9d29ec0b64c89
Device Name: xvdb

Disk Number: 4
Volume ID: vol-03e2dbe464b66f0a1
Device Name: xvdc
```

```
PS C:\ProgramData\Amazon\Tools> ebsnvme-id.exe 4
Disk Number: 4
Volume ID: vol-03e2dbe464b66f0a1
Device Name: xvdc
```

# NVMe I/O operation timeout for Amazon EBS volumes
I/O operation timeout

Most operating systems specify a timeout for I/O operations submitted to NVMe devices.

**Linux instances**  
On Linux, EBS volumes attached to Nitro-based instances use the default NVMe driver provided by the operating system. The default timeout is 30 seconds and can be changed using the `nvme_core.io_timeout` boot parameter. For most Linux kernels earlier than version 4.6, this parameter is `nvme.io_timeout`.

If I/O latency exceeds the value of this timeout parameter, the Linux NVMe driver fails the I/O and returns an error to the filesystem or application. Depending on the I/O operation, your filesystem or application can retry the error. In some cases, your filesystem might be remounted as read-only.

For an experience similar to EBS volumes attached to Xen instances, we recommend setting `nvme_core.io_timeout` to the highest value possible. For current kernels, the maximum is 4294967295, while for earlier kernels the maximum is 255. Depending on the version of Linux, the timeout might already be set to the supported maximum value. For example, the timeout is set to 4294967295 by default for Amazon Linux AMI 2017.09.01 and later.
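On distributions that boot with GRUB, one way to persist this setting is to append the parameter to the kernel command line. The exact file and regeneration command vary by distribution; this is a sketch, not a definitive procedure:

```shell
# /etc/default/grub — append the NVMe I/O timeout to any existing
# kernel parameters (on kernels earlier than 4.6, use nvme.io_timeout)
GRUB_CMDLINE_LINUX="nvme_core.io_timeout=4294967295"

# Then regenerate the GRUB configuration and reboot. For example, on
# RHEL-family systems:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```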

You can verify the maximum value for your Linux distribution by writing a value higher than the suggested maximum to `/sys/module/nvme_core/parameters/io_timeout` and checking for the `Numerical result out of range` error when attempting to save the file.

**Windows instances**  
On Windows, the default timeout is 60 seconds and the maximum is 255 seconds. You can modify the `TimeoutValue` disk class registry setting using the procedure described in [Registry Entries for SCSI Miniport Drivers](https://learn.microsoft.com/en-us/previous-versions/windows/drivers/storage/registry-entries-for-scsi-miniport-drivers).

# NVMe Abort command for Amazon EBS volumes
Abort command

The `Abort` command is an NVMe admin command that is issued to end a specific command that was previously submitted to the controller. This command is typically issued by the device driver to storage devices that have exceeded the I/O operation timeout threshold. 

Amazon EC2 instance types that support the `Abort` command by default will end a specific command that was previously submitted to the controller when an `Abort` command is issued to attached Amazon EBS volumes. Amazon EC2 instances that do not support the `Abort` command take no action when an `Abort` command is issued to attached Amazon EBS volumes.

The `Abort` command is supported with:
+ Amazon EBS devices with NVMe device version 1.4 or higher.
+ All Amazon EC2 instances, **except** Xen-based instance types and the following Nitro-based instance types:
  + General purpose: A1, M5, M5a, M5ad, M5d, M5dn, M5n, M5zn, M6g, M6gd, Mac1, Mac2, T3, T3a, T4g
  + Compute optimized: C5, C5a, C5ad, C5d, C5n, C6g, C6gd
  + Memory optimized: R5, R5a, R5ad, R5d, R5dn, R5n, R6g, R6gd, U-12tb1, U-18tb1, U-24tb1, U-3tb1, U-6tb1, U-9tb1, X2gd, X2iezn, Z1d
  + Storage optimized: D3, D3en, I3en
  + Accelerated computing: DL1, G4ad, G4dn, G5, G5g, Inf1, P3dn, P4d, P4de, VT1

For more information, see section **5.1 Abort command** of the [NVM Express Base Specification](https://nvmexpress.org/wp-content/uploads/NVM-Express-1_4-2019.06.10-Ratified.pdf).

# Amazon EBS volume lifecycle
Volume lifecycle

The lifecycle of an Amazon EBS volume starts with the creation process. You can create a volume from an Amazon EBS snapshot or you can create an empty volume. Before you can use your volume, you must attach it to one or more Amazon EC2 instances that are in the same Availability Zone as the volume. You can attach multiple volumes to an instance. If needed, you can detach a volume from one instance and then attach it to another instance. If your storage requirements change, you can modify the size or performance of the volume at any time. You can create point-in-time backups of your volumes by creating Amazon EBS snapshots. If you no longer need a volume, you can delete it to stop incurring the related storage costs.

The following image shows actions that you can perform on your volumes as part of the volume lifecycle. There are also tasks that you perform by connecting to the instance and running an operating system command. For example, formatting the volume, mounting the volume, managing partitions, and viewing the free disk space.

![\[The lifecycle of an EBS volume.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/volume-lifecycle.png)


**Topics**
+ [Create a volume](ebs-creating-volume.md)
+ [Copy a volume](ebs-copying-volume.md)
+ [Attach a volume to an instance](ebs-attaching-volume.md)
+ [Attach a volume to multiple instances](ebs-volumes-multi.md)
+ [Make a volume available for use](ebs-using-volumes.md)
+ [View volume details](ebs-describing-volumes.md)
+ [Modify a volume](ebs-modify-volume.md)
+ [Detach a volume from an instance](ebs-detaching-volume.md)
+ [Delete a volume](ebs-deleting-volume.md)

# Create an Amazon EBS volume
Create a volume

You can create an Amazon EBS volume and then attach it to any EC2 instance in the same Availability Zone.

You can either **create an empty volume**, or you can **create a volume from an Amazon EBS snapshot**. If you create a volume from a snapshot, the volume begins as an exact replica of the volume that was used to create that snapshot.

**Volume initialization**  
When you create a volume from a snapshot, the storage blocks from the snapshot must be downloaded from Amazon S3 and written to the volume before you can access them. This process is called volume initialization. During this time, the volume will experience increased I/O latency. Full volume performance is achieved only after all storage blocks have been downloaded and written to the volume.

The default volume initialization rate fluctuates throughout the initialization process, which could make completion times unpredictable.

To minimize the performance impacts associated with volume initialization, you can use an Amazon EBS Provisioned Rate for Volume Initialization (volume initialization rate) or fast snapshot restore. For more information, see [Initialize Amazon EBS volumes](initalize-volume.md).

**Volume encryption**  
The encryption state of the volume depends on whether your account is [enabled for encryption by default](encryption-by-default.md), and on the encryption state of the snapshot, if you choose to use one. The following table summarizes the possible encryption outcomes.


| Encryption by default | Snapshot used? | Volume encryption outcome | Note | 
| --- | --- | --- | --- | 
| Disabled | No | Optional encryption | If you enable encryption, you can specify the KMS key to use. If you enable encryption but do not specify a KMS key, the AWS managed key (aws/ebs) is used. | 
| Disabled | Yes, unencrypted | Optional encryption | If you enable encryption, you can specify the KMS key to use. If you enable encryption but do not specify a KMS key, the AWS managed key (aws/ebs) is used. | 
| Disabled | Yes, encrypted | Automatic encryption | You can specify the KMS key to use. If you do not specify a KMS key, the volume is encrypted using the same KMS key as the source snapshot. | 
| Enabled | No | Automatic encryption | You can specify the KMS key to use. If you do not specify a KMS key, the key specified for encryption by default is used. | 
| Enabled | Yes, unencrypted | Automatic encryption | You can specify the KMS key to use. If you do not specify a KMS key, the key specified for encryption by default is used. | 
| Enabled | Yes, encrypted | Automatic encryption | You can specify the KMS key to use. If you do not specify a KMS key, the volume is encrypted using the same key as the source snapshot (console), or the key specified for encryption by default (CLI/API). | 
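The table reduces to a simple rule: encryption is automatic unless encryption by default is disabled *and* the source (if any) is unencrypted. A sketch of that decision logic (the function name and argument values are ours, for illustration only):

```shell
# Decide the encryption outcome for a new volume.
#   $1: encryption by default (enabled|disabled)
#   $2: snapshot state (none|unencrypted|encrypted)
encryption_outcome() {
    if [ "$1" = "enabled" ] || [ "$2" = "encrypted" ]; then
        echo "automatic"    # encryption can't be disabled
    else
        echo "optional"     # you choose whether to encrypt
    fi
}

encryption_outcome disabled encrypted   # → automatic
```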

**Additional considerations**
+ Volumes must be attached to instances in the same Availability Zone.
+ Volumes are ready for use only after they enter the `available` state.
+ When you create a volume using the console, `gp3` is the default volume type. For the command line tools, API, and SDK, `gp2` is the default volume type. 
+ To use a volume with an instance running on an outpost, you must create the volume on the same outpost as the instance. 
+ If you create a volume for use with a Windows instance, and it's larger than 2048 GiB, ensure that you configure the volume to use GPT partition tables. For more information, see [Amazon EBS volume constraints](volume_constraints.md) and [Windows support for disks larger than 2 TB](https://learn.microsoft.com/en-us/troubleshoot/windows-server/backup-and-storage/support-for-hard-disks-exceeding-2-tb).
+ Volumes are also created indirectly by launching an Amazon EC2 instance. Either the AMI used to launch the instance, or the instance launch request itself could include block device mappings for Amazon EBS volumes. For more information, see [Block device mappings](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html).

------
#### [ Console ]

**To create a volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes** and then choose **Create volume**.

1. (*Outpost customers only*) For **Outpost ARN**, enter the ARN of the AWS Outpost on which to create the volume.

1. For **Volume type**, choose the type of volume to create. For more information about the available volume types, see [Amazon EBS volume types](ebs-volume-types.md).

1. For **Size**, enter the size of the volume, in GiB. For more information, see [Amazon EBS volume constraints](volume_constraints.md).

1. (*For `io1`, `io2`, and `gp3` only*) For **IOPS**, enter the maximum number of input/output operations per second (IOPS) that the volume should provide.

1. (*For `gp3` only*) For **Throughput**, enter the throughput that the volume should provide, in MiB/s.

1. For **Availability Zone**, choose the Availability Zone in which to create the volume.

1. For **Snapshot ID**, do one of the following:
   + To create an empty volume, keep the default value (**Don't create volume from a snapshot**).
   + To create the volume from a snapshot, select the snapshot to use.

1. If you have selected a snapshot, for **Volume initialization rate**, you can optionally specify the volume initialization rate, in MiB/s, at which the snapshot blocks are to be downloaded from Amazon S3 to the volume after creation. For more information, see [Use an Amazon EBS Provisioned Rate for Volume Initialization](initalize-volume.md#volume-initialization-rate). To use the default initialization rate or fast snapshot restore (if it is enabled for the selected snapshot), don't specify a rate.

1. (*`io1` and `io2` only*) To enable the volume for Amazon EBS Multi-Attach, select **Enable Multi-Attach**. For more information, see [Attach an EBS volume to multiple EC2 instances using Multi-Attach](ebs-volumes-multi.md).

1. Set the encryption status for the volume.
   + If your account is enabled for [encryption by default](encryption-by-default.md), encryption is automatic and can't be disabled.
   + If you selected an encrypted snapshot, encryption is automatic and can't be disabled.
   + If your account is not enabled for [encryption by default](encryption-by-default.md), and you select an unencrypted snapshot or do not select a snapshot, encryption is optional.

1. (*Optional*) To assign custom tags to the volume, in the **Tags** section, choose **Add tag**, and then enter a tag key and value pair.

1. Choose **Create volume**.

1. To use the volume, wait for it to reach the `available` state and then attach it to an Amazon EC2 instance in the same Availability Zone. For more information, see [Attach an Amazon EBS volume to an Amazon EC2 instance](ebs-attaching-volume.md).

------
#### [ AWS CLI ]

**To create a volume**  
Use the [create-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-volume.html) command. The following example creates an empty gp3 volume with a size of 100 GiB in the specified Availability Zone.

```
aws ec2 create-volume \
    --volume-type gp3 \
    --size 100 \
    --availability-zone us-east-1a
```
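To instead create the volume from a snapshot and block until it is ready to attach, you can combine **create-volume** with the **volume-available** waiter. The snapshot and volume IDs below are placeholders; if you omit `--size` when creating from a snapshot, the volume defaults to the snapshot's size.

```shell
# Create an encrypted gp3 volume from a snapshot (IDs are placeholders)
aws ec2 create-volume \
    --snapshot-id snap-01234567890abcdef \
    --volume-type gp3 \
    --availability-zone us-east-1a \
    --encrypted \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=data-volume}]'

# Block until the new volume reaches the available state
aws ec2 wait volume-available \
    --volume-ids vol-01234567890abcdef
```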

------
#### [ PowerShell ]

**To create a volume**  
Use the [New-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2Volume.html) cmdlet. The following example creates an empty gp3 volume with a size of 100 GiB in the specified Availability Zone.

```
New-EC2Volume `
    -VolumeType gp3 `
    -Size 100 `
    -AvailabilityZone us-east-1a
```

------

# Copy an Amazon EBS volume
Copy a volume

You can create an instant point-in-time copy of an Amazon EBS volume within the same Availability Zone. A volume copy begins as a crash-consistent, point-in-time copy of the source volume. It includes all the data blocks written to the source volume at the time the volume copy initialization begins. The volume copy gets its own unique volume ID. Volume copies are created immediately and can be attached to an Amazon EC2 instance once the copy reaches the `available` state. Using volume copies, you can quickly copy your production data for test and development environments.

## Initialization


Volume copies are initialized after creation. During initialization, the data blocks are copied from the source volume and written to the volume copy in the background. The volume remains in the `initializing` state until initialization completes.

**Performance during initialization**  
Copy operations do not affect the performance of the source volume. You can continue using the source volume normally during the copy process. Copied volumes can be accessed instantly without waiting for the data to be copied from the source volume. Volume copies provide instant access to data with single-digit millisecond latency, however, actual latency might vary depending on the volume type. During initialization, the volume copy delivers **baseline performance** equal to the lowest of the following three values:
+ 3,000 IOPS and 125 MiB/s
+ The provisioned performance for the **source volume**
+ The provisioned performance for the **volume copy**

The volume copy can exceed the baseline performance when the following criteria are met:

1. Both the source volume and volume copy are provisioned with more than 3,000 IOPS and 125 MiB/s.

1. The source volume has unutilized performance capacity (driven performance is less than provisioned performance).

For example, if the source volume is provisioned with 10,000 IOPS and your workload is currently driving only 5,000 IOPS, and the volume copy is provisioned with 10,000 IOPS, the volume copy can achieve performance higher than the 3,000 IOPS baseline performance during initialization by using the source volume's unutilized 5,000 IOPS.

**Initialization duration**  
The time it takes to initialize a volume copy depends on the size of the block data written to the source volume at the time of creating the volume copy. Volume copies are initialized on a best-effort basis, with the following general guidelines. For the first 1 TiB of data blocks, volume initialization takes up to 6 hours. For each subsequent 1 TiB of data blocks up to 16 TiB, initialization takes 1.2 hours per TiB. For written data larger than 16 TiB, initialization takes 24 hours.
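These guidelines can be turned into a rough upper-bound estimate. Note that at 16 TiB the two formulas agree (6 + 15 × 1.2 = 24 hours). The function below is our own sketch, not an AWS-provided tool:

```shell
# Rough upper-bound estimate (in hours) for volume copy initialization,
# per the guidelines above: 6 h for the first TiB of written data, then
# 1.2 h per TiB up to 16 TiB, and 24 h beyond that.
estimate_init_hours() {
    awk -v t="$1" 'BEGIN {
        if (t <= 1)       print 6
        else if (t <= 16) print 6 + (t - 1) * 1.2
        else              print 24
    }'
}

estimate_init_hours 10   # → 16.8
```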

**Monitor initialization progress**  
You can monitor the initialization progress using the [describe-volume-status](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volume-status.html) AWS CLI command or Amazon EventBridge. For more information, see [Monitor the status of Amazon EBS volume initialization](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-initialize-monitor.html).

## Encryption


Copies of encrypted volumes are automatically encrypted with the same KMS key as the source volume. You can't copy unencrypted volumes.

## Considerations

+ You can create copies from encrypted source volumes only. You can't create copies from unencrypted source volumes.
+ You can create only one volume copy from a source volume at a time. You can create subsequent copies of the same source volume only once the previous volume copy has been fully initialized.
+ You can have a maximum of 5 in-progress volume copies per Region. If you exceed this quota, you get the `CopyVolumesLimitExceeded` error. You can [request a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) if needed.
+ The volume copy must be created in the same Availability Zone as the source volume.
+ The size of the volume copy must be equal to or greater than the size of the source volume.
+ You can't copy a volume copy while it is being created or initialized.
+ To create a volume copy, the source volume must be in the `available` or `in-use` state, and volume modifications must be in the `completed` or `optimizing` state.
+ Volume copies are subject to the same account and Regional storage and IOPS quotas as regular Amazon EBS volumes. For more information, see [Amazon EBS quotas](https://docs.aws.amazon.com/general/latest/gr/ebs-service.html#limits_ebs).
+ If you delete the source volume while the copy operation is in progress, the copy operation still completes.
+ Tags assigned to the source volume are not assigned to the volume copy.
+ You can't create copies from volumes on Outposts or in Wavelength Zones.

## Pricing


When you initiate a volume copy operation, you are charged a one-time fee per GiB of data blocks written to the volume copy. After the volume copy is created, it is charged the same way as any other Amazon EBS volume in your account. For more information, see [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/).

## Copy a volume


Use one of the following methods to copy an Amazon EBS volume.

------
#### [ Console ]

**To copy a volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume to copy and choose **Actions**, **Copy volume**.

1. For **Volume type**, choose the volume type for the copy. The default volume type is **gp3**.

1. For **Size**, enter the size for the volume copy, in GiBs. The size must be equal to or greater than the size of the source volume.

1. (*`io1`, `io2`, and `gp3` only*) For **IOPS**, enter the maximum number of input/output operations per second (IOPS) for the volume copy.

1. (*`gp3` only*) For **Throughput**, enter the throughput for the volume copy, in MiB/s.

1. (*`io1` and `io2` only*) To enable the volume copy for Amazon EBS Multi-Attach, select **Enable Multi-Attach**.

1. (*Optional*) To assign custom tags to the volume copy, in the **Tags** section, choose **Add tag**, and then enter a tag key and value pair.

1. Choose **Copy volume**.

1. The copied volume enters the `creating` state and then transitions to `available` shortly after. You can then attach it to an Amazon EC2 instance in the same Availability Zone.

------
#### [ AWS CLI ]

**To copy a volume**  
Use the [copy-volumes](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/copy-volumes.html) command.

The following example creates a volume copy of `vol-01234567890abcdef` with the `gp3` volume type, a size of `100` GiB, and throughput of `250` MiB/s.

```
aws ec2 copy-volumes \
--source-volume-id vol-01234567890abcdef \
--volume-type gp3 \
--size 100 \
--throughput 250
```

------
#### [ PowerShell ]

**To copy a volume**  
Use the [Copy-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Copy-EC2Volume.html) cmdlet.

The following example creates a volume copy of `vol-01234567890abcdef` with the `gp3` volume type, a size of `100` GiB, and throughput of `250` MiB/s.

```
Copy-EC2Volume `
-SourceVolumeId vol-01234567890abcdef `
-VolumeType gp3 `
-Size 100 `
-Throughput 250
```

------

# Attach an Amazon EBS volume to an Amazon EC2 instance
Attach a volume to an instance

You can attach an available EBS volume to one or more of your instances that are in the same Availability Zone as the volume.

For information about adding EBS volumes to your instance at launch, see [instance block device mapping](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html#instance-block-device-mapping).

**Considerations**
+ The maximum number of Amazon EBS volumes that you can attach to an instance depends on the instance type. If you exceed the volume attachment limit for an instance type, the attachment request fails with the `AttachmentLimitExceeded` error. For more information, see [Instance volume limits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html).
+ You can attach volumes to instances that are in the same Availability Zone only.
+ Multi-Attach enabled volumes can be attached to up to 16 instances. For more information, see [Attach an EBS volume to multiple EC2 instances using Multi-Attach](ebs-volumes-multi.md).
+ If the volume has an AWS Marketplace product code:
  + You can attach it to a stopped instance only.
  + You must be subscribed to the AWS Marketplace code that is on the volume.
  + The instance's configuration, such as its type and operating system, must support that specific AWS Marketplace code. For example, you cannot take a volume from a Windows instance and attach it to a Linux instance.
  + AWS Marketplace codes are copied from the volume to the instance.
+ The device name that you specify is used by Amazon EC2. The block device driver can mount the device with a device name that is different from the one you specify. For more information, see [Device names for volumes on Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html).
+ In some cases, a volume other than the volume attached to `/dev/xvda` or `/dev/sda` can become the root volume for the instance. This can happen if you attach the root volume of another instance, or a volume created from the snapshot of a root volume, to an instance with an existing root volume. For more information, see [ Boot from the wrong volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-booting-from-wrong-volume.html).
+ Some instance types support more than one EBS card. You can select the EBS card to attach the volume to by specifying the EBS card index. For the instance types that support multiple EBS cards, see [EBS cards](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs_cards.html).
  + Your root volume must be attached to EBS card index `0`.
  + For the instances that support multiple EBS cards, if you do not specify the EBS card index, your volume is attached to EBS card index `0`.
  + When configuring your EC2 instances for high-performance workloads, it is essential to balance EBS volumes across EBS cards based on performance requirements, to avoid running into performance limits on any of the EBS cards.
  + The volume attachment limit for an instance type is spread equally across each EBS card. For example, on an EC2 instance that supports `128` volume attachments with 2 EBS cards, each EBS card can support up to `64` volume attachments. If you exceed the EBS card attachment limit, the request fails with the `CardAttachmentLimitExceeded` error.

------
#### [ Console ]

**To attach an EBS volume to an instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume to attach and choose **Actions**, **Attach volume**.

1. For **Instance**, enter the ID of the instance or select the instance from the list of options.

1. For **Device name**, do one of the following:
   + For a root volume, select the required device name from the **Reserved for root volume** section of the list. Typically `/dev/sda1` or `/dev/xvda` for Linux instances depending on the AMI, or `/dev/sda1` for Windows instances.
   + For data volumes, select an available device name from the **Recommended for data volumes** section of the list.
   + To use a custom device name, select **Specify a custom device name** and then enter the device name to use.

1. Choose **Attach volume**.

1. Connect to the instance and mount the volume. For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md).

------
#### [ AWS CLI ]

**To attach an EBS volume to an instance**  
Use the [attach-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/attach-volume.html) command. The following example attaches the specified volume to the specified instance using the specified device name.

```
aws ec2 attach-volume \
    --volume-id vol-01234567890abcdef \
    --instance-id i-1234567890abcdef0 \
    --device /dev/sdf
```

------
#### [ PowerShell ]

**To attach an EBS volume to an instance**  
Use the [Add-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Add-EC2Volume.html) cmdlet. The following example attaches the specified volume to the specified instance using the specified device name.

```
Add-EC2Volume `
    -VolumeId vol-01234567890abcdef `
    -InstanceId i-1234567890abcdef0 `
    -Device /dev/sdf
```

------

# Attach an EBS volume to multiple EC2 instances using Multi-Attach
Attach a volume to multiple instances

Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (`io1` or `io2`) volume to multiple instances that are in the same Availability Zone. You can attach multiple Multi-Attach enabled volumes to an instance or set of instances. Each instance to which the volume is attached has full read and write permission to the shared volume. Multi-Attach makes it easier for you to achieve higher application availability in applications that manage concurrent write operations.

**Pricing and billing**  
There are no additional charges for using Amazon EBS Multi-Attach. You are billed the standard charges that apply to Provisioned IOPS SSD (`io1` and `io2`) volumes. For more information, see [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/).

**Topics**
+ [Considerations and limitations](#considerations)
+ [Performance for Multi-Attach volumes](ebs-multi-attach-perf.md)
+ [Enable Multi-Attach](working-with-multi-attach.md)
+ [Disable Multi-Attach](disable-multi-attach.md)
+ [NVMe reservations](nvme-reservations.md)

## Considerations and limitations

+ Multi-Attach enabled volumes can be attached to up to 16 instances built on the [ Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) that are in the same Availability Zone.
+ **Linux instances** support Multi-Attach enabled `io1` and `io2` volumes. **Windows instances** support Multi-Attach enabled `io2` volumes only.
+ The maximum number of Amazon EBS volumes that you can attach to an instance depends on the instance type and instance size. For more information, see [ instance volume limits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html).
+ Multi-Attach is supported exclusively on [Provisioned IOPS SSD (`io1` and `io2`) volumes](provisioned-iops.md#EBSVolumeTypes_piops).
+ Multi-Attach for `io1` volumes is available in the following Regions only: US East (N. Virginia), US West (Oregon), and Asia Pacific (Seoul).

  Multi-Attach for `io2` is available in all Regions that support `io2`.
**Note**  
For better performance, consistency, and durability at a lower cost, we recommend that you use `io2` volumes.
+ `io1` volumes with Multi-Attach enabled are not supported on [instances built on the Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) that support only the Scalable Reliable Datagram (SRD) networking protocol. To use Multi-Attach with these instance types, you must use `io2`.
+ Standard file systems, such as XFS and EXT4, are not designed to be accessed simultaneously by multiple servers, such as EC2 instances. You should use a clustered file system to ensure data resiliency and reliability for your production workloads.
+ Multi-Attach enabled `io2` volumes support I/O fencing. I/O fencing protocols control write access in a shared storage environment to maintain data consistency. Your applications must provide write ordering for the attached instances to maintain data consistency. For more information, see [Use NVMe reservations with Multi-Attach enabled Amazon EBS volumes](nvme-reservations.md).

  Multi-Attach enabled `io1` volumes do not support I/O fencing.
+ Multi-Attach enabled volumes can't be created as boot volumes.
+ Multi-Attach enabled volumes can be attached to one block device mapping per instance.
+ Multi-Attach can't be enabled during instance launch using either the Amazon EC2 console or RunInstances API.
+ Multi-Attach enabled volumes that have an issue at the Amazon EBS infrastructure layer are unavailable to all attached instances. Issues at the Amazon EC2 or networking layer might impact only some attached instances.
+ The following table shows volume modification support for Multi-Attach enabled `io1` and `io2` volumes after creation.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes-multi.html)

  \* You can't enable or disable Multi-Attach while the volume is attached to an instance.
+ Multi-Attach enabled volumes are deleted on instance termination if the last attached instance is terminated and if that instance is configured to delete the volume on termination. If the volume is attached to multiple instances that have different delete on termination settings in their volume block device mappings, the last attached instance's block device mapping setting determines the delete on termination behavior.

  To ensure predictable delete on termination behavior, enable or disable delete on termination for all of the instances to which the volume is attached. For more information, see [ Preserve data when an instance is terminated](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/preserving-volumes-on-termination.html).
+ You can monitor a Multi-Attach enabled volume using the CloudWatch Metrics for Amazon EBS volumes. Data is aggregated across all of the attached instances. You can't monitor metrics for individual attached instances. For more information, see [Amazon CloudWatch metrics for Amazon EBS](using_cloudwatch_ebs.md).
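
As a sketch of this aggregated monitoring, you could retrieve a volume-level metric with the AWS CLI; the volume ID and time range below are placeholders:

```shell
# Sum of read operations across all instances attached to the volume.
# Datapoints always reflect the combined workload, not a single instance.
aws cloudwatch get-metric-statistics \
    --namespace AWS/EBS \
    --metric-name VolumeReadOps \
    --dimensions Name=VolumeId,Value=vol-01234567890abcdef \
    --start-time 2024-05-17T00:00:00Z \
    --end-time 2024-05-17T01:00:00Z \
    --period 300 \
    --statistics Sum
```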

# Performance for Multi-Attach Amazon EBS volumes
Performance for Multi-Attach volumes

Each attached instance is able to drive its maximum IOPS performance up to the volume's maximum provisioned performance. However, the aggregate performance of all of the attached instances can't exceed the volume's maximum provisioned performance. If the attached instances' combined demand for IOPS is higher than the volume's provisioned IOPS, the volume will not exceed its provisioned performance.

For example, say you create an `io2` Multi-Attach enabled volume with `80,000` provisioned IOPS and you attach it to an `m7g.large` instance that supports up to `40,000` IOPS, and an `r7g.12xlarge` instance that supports up to `60,000` IOPS. Each instance can drive its maximum IOPS, because each is less than the volume's provisioned IOPS of `80,000`. However, if both instances drive I/O to the volume simultaneously, their combined IOPS can't exceed the volume's provisioned performance of `80,000` IOPS.
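
The aggregation rule is simply a cap: combined demand is served up to the provisioned level. A quick back-of-the-envelope check of the example above:

```shell
# Provisioned IOPS for the volume and the maximum each instance can drive
provisioned=80000
demand_a=40000   # m7g.large maximum
demand_b=60000   # r7g.12xlarge maximum

combined=$((demand_a + demand_b))
# Aggregate performance is capped at the volume's provisioned IOPS
if [ "$combined" -lt "$provisioned" ]; then
    achievable=$combined
else
    achievable=$provisioned
fi
echo "Combined demand: $combined IOPS; achievable aggregate: $achievable IOPS"
```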

To achieve consistent performance, it is best practice to balance I/O driven from attached instances across the sectors of a Multi-Attach enabled volume.

For more information about IOPS performance for the Amazon EC2 instance types, see [ Amazon EBS optimized instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide*.

# Enable Multi-Attach for an Amazon EBS volume
Enable Multi-Attach

Multi-Attach enabled volumes can be managed in much the same way as any other Amazon EBS volume. However, to use Multi-Attach, you must enable it for the volume.

When you create a new volume, Multi-Attach is disabled by default. You can enable Multi-Attach when you create a volume.

You can also enable Multi-Attach for `io2` volumes after creation, but only if they are not attached to any instances. You can't enable Multi-Attach for `io1` volumes after creation.

After you enable Multi-Attach for a volume, you can attach the volume to an instance in the same way that you attach any other EBS volume. For more information, see [Attach an Amazon EBS volume to an Amazon EC2 instance](ebs-attaching-volume.md).
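
For example, once Multi-Attach is enabled, attaching the volume to a second instance is the same [attach-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/attach-volume.html) call with a different instance ID. The IDs below are placeholders, and both instances must be in the volume's Availability Zone:

```shell
# Attach the same Multi-Attach enabled volume to two instances
aws ec2 attach-volume \
    --volume-id vol-01234567890abcdef \
    --instance-id i-1234567890abcdef0 \
    --device /dev/sdf

aws ec2 attach-volume \
    --volume-id vol-01234567890abcdef \
    --instance-id i-0987654321fedcba0 \
    --device /dev/sdf
```

On instances built on the Nitro System, the volume is exposed as an NVMe block device regardless of the `--device` name requested.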

------
#### [ Console ]

**To enable Multi-Attach during volume creation**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Choose **Create volume**.

1. For **Volume type**, choose **Provisioned IOPS SSD (`io1`)** or **Provisioned IOPS SSD (`io2`)**.

1. For **Size** and **IOPS**, choose the required volume size and the number of IOPS to provision.

1. For **Availability Zone**, choose the same Availability Zone that the instances are in.

1. For **Amazon EBS Multi-Attach**, choose **Enable Multi-Attach**.

1. (Optional) For **Snapshot ID**, choose the snapshot from which to create the volume.

1. Set the encryption status for the volume.

   If the selected snapshot is encrypted, or if your account is enabled for [encryption by default](encryption-by-default.md), then encryption is automatically enabled and you can't disable it. You can choose the KMS key to use to encrypt the volume.

   If the selected snapshot is unencrypted and your account is not enabled for encryption by default, encryption is optional. To encrypt the volume, for **Encryption**, choose **Encrypt this volume** and then select the KMS key to use to encrypt the volume.

   You can attach encrypted volumes only to instances that support Amazon EBS encryption. For more information, see [Amazon EBS encryption](ebs-encryption.md).

1. (Optional) To assign custom tags to the volume, in the **Tags** section, choose **Add tag**, and then enter a tag key and value pair. 

1. Choose **Create volume**.

**To enable Multi-Attach after creation**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume and choose **Actions**, **Modify volume**.

1. For **Amazon EBS Multi-Attach**, choose **Enable Multi-Attach**.

1. Choose **Modify**.

------
#### [ AWS CLI ]

**To enable Multi-Attach during volume creation**  
Use the [create-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-volume.html) command with the `--multi-attach-enabled` option.

```
aws ec2 create-volume \
    --volume-type io2 \
    --multi-attach-enabled \
    --size 100 \
    --iops 2000 \
    --region us-west-2 \
    --availability-zone us-west-2b
```

**To enable Multi-Attach after creation**  
Use the [modify-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-volume.html) command with the `--multi-attach-enabled` option.

```
aws ec2 modify-volume \
    --volume-id vol-01234567890abcdef \
    --multi-attach-enabled
```

------
#### [ PowerShell ]

**To enable Multi-Attach during volume creation**  
Use the [New-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2Volume.html) cmdlet with the `-MultiAttachEnabled` parameter.

```
New-EC2Volume `
    -VolumeType io2 `
    -MultiAttachEnabled $true `
    -Size 100 `
    -Iops 2000 `
    -Region us-west-2 `
    -AvailabilityZone us-west-2b
```

**To enable Multi-Attach after creation**  
Use the [Edit-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Edit-EC2Volume.html) cmdlet with the `-MultiAttachEnabled` parameter.

```
Edit-EC2Volume `
    -VolumeId vol-01234567890abcdef `
    -MultiAttachEnabled $true
```

------

# Disable Multi-Attach for an Amazon EBS volume
Disable Multi-Attach

You can disable Multi-Attach for an `io2` volume only if it is attached to no more than one instance.

You can't disable Multi-Attach for `io1` volumes after creation.

------
#### [ Console ]

**To disable Multi-Attach after creation**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume and choose **Actions**, **Modify volume**.

1. For **Amazon EBS Multi-Attach**, clear **Enable Multi-Attach**.

1. Choose **Modify**.

------
#### [ AWS CLI ]

**To disable Multi-Attach after creation**  
Use the [modify-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-volume.html) command with the `--no-multi-attach-enabled` option.

```
aws ec2 modify-volume \
    --volume-id vol-01234567890abcdef \
    --no-multi-attach-enabled
```

------
#### [ PowerShell ]

**To disable Multi-Attach after creation**  
Use the [Edit-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Edit-EC2Volume.html) cmdlet with the `-MultiAttachEnabled` parameter.

```
Edit-EC2Volume `
    -VolumeId vol-01234567890abcdef `
    -MultiAttachEnabled $false
```

------

# Use NVMe reservations with Multi-Attach enabled Amazon EBS volumes
NVMe reservations

Multi-Attach enabled `io2` volumes support NVMe reservations, which is a set of industry-standard storage fencing protocols. These protocols enable you to create and manage reservations that control and coordinate access from multiple instances to a shared volume. Reservations are used by shared storage applications to ensure data consistency.

**Topics**
+ [Requirements](#nvme-reservations-reqs)
+ [Enabling support for NVMe reservations](#nvme-reservations-enable)
+ [Supported NVMe Reservation commands](#nvme-reservations-commands)
+ [Pricing](#nvme-reservations-cost)

## Requirements


NVMe reservations are supported only with Multi-Attach enabled `io2` volumes. Multi-Attach enabled volumes can be attached only to instances built on the Nitro System.

NVMe reservations are supported with the following operating systems:
+ SUSE Linux Enterprise 12 SP3 and later
+ RHEL 8.3 and later
+ Amazon Linux 2 and later
+ Windows Server 2016 and later

**Note**  
For supported Windows Server AMIs dated 2023.09.13 and later, the required NVMe drivers are included. For earlier AMIs, you must update to NVMe driver version 1.5.0 or later. For more information, see [AWS NVMe drivers](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/aws-nvme-drivers.html).

If you're using EC2Launch v2 to initialize your disks, you must upgrade to version **2.0.1521** or later. For more information, see [Use the EC2Launch v2 agent](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2launch-v2.html).

## Enabling support for NVMe reservations


Support for NVMe reservations is enabled by default for all Multi-Attach enabled `io2` volumes created after **September 18, 2023**.

To enable support for NVMe reservations for existing `io2` volumes created before September 18, 2023, you must detach all instances from the volume and then reattach the required instances. All attachments made after detaching all of the instances will have NVMe reservations enabled.
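
A sketch of that detach/reattach cycle with the AWS CLI (the IDs and device name are placeholders; repeat the detach for every attached instance):

```shell
# Detach the volume from each attached instance
aws ec2 detach-volume \
    --volume-id vol-01234567890abcdef \
    --instance-id i-1234567890abcdef0

# Wait until no instances are attached
aws ec2 wait volume-available \
    --volume-ids vol-01234567890abcdef

# Reattach; attachments made from this point have NVMe reservations enabled
aws ec2 attach-volume \
    --volume-id vol-01234567890abcdef \
    --instance-id i-1234567890abcdef0 \
    --device /dev/sdf
```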

## Supported NVMe Reservation commands


Amazon EBS supports the following NVMe Reservation commands:

**Reservation Register**  
Registers, unregisters, or replaces a reservation key. A registration key is used to identify and authenticate an instance. Registering a reservation key with a volume creates an association between the instance and the volume. You must register the instance with the volume before that instance can acquire a reservation.

**Reservation Acquire**  
Acquires a reservation on a volume, preempts a reservation held on a namespace, and aborts a reservation held on a volume. The following reservation types can be acquired:  
+ Write Exclusive Reservation
+ Exclusive Access Reservation
+ Write Exclusive - Registrants Only Reservation
+ Exclusive Access - Registrants Only Reservation
+ Write Exclusive - All Registrants Reservation
+ Exclusive Access - All Registrants Reservation

**Reservation Release**  
Releases or clears a reservation held on a volume.

**Reservation Report**  
Describes the registration and reservation status of a volume.
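
On an attached Linux instance, you can inspect this state with the `nvme-cli` package; for example, the Reservation Report command maps to `nvme resv-report`. The device name below is a placeholder, and the command requires root:

```shell
# Show the registration and reservation status of the shared volume
sudo nvme resv-report /dev/nvme1n1
```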

## Pricing


There are no additional costs for enabling and using NVMe reservations.

# Make an Amazon EBS volume available for use
Make a volume available for use

After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it. After you make the EBS volume available for use, you can access it in the same ways that you access any other volume. Any data written to this file system is written to the EBS volume and is transparent to applications using the device.

You can take snapshots of your EBS volume for backup purposes or to use as a baseline when you create another volume. For more information, see [Amazon EBS snapshots](ebs-snapshots.md).

If the EBS volume you are preparing for use is greater than 2 TiB, you must use a GPT partitioning scheme to access the entire volume. For more information, see [Amazon EBS volume constraints](volume_constraints.md).
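
For example, on Linux you could create a GPT partition table with `parted`. The device name is a placeholder, and this erases any existing partition table on the device:

```shell
# Create a GPT label, then a single partition spanning the whole disk
sudo parted /dev/xvdf --script mklabel gpt
sudo parted /dev/xvdf --script mkpart primary xfs 0% 100%
```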

## Linux instances


### Format and mount an attached volume


Suppose that you have an EC2 instance with an EBS volume for the root device, `/dev/xvda`, and that you have just attached an empty EBS volume to the instance using `/dev/sdf`. Use the following procedure to make the newly attached volume available for use.

**To format and mount an EBS volume on Linux**

1. Connect to your instance using SSH. For more information, see [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html).

1. The device could be attached to the instance with a different device name than you specified in the block device mapping. For more information, see [device names on Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html). Use the **lsblk** command to view your available disk devices and their mount points (if applicable) to help you determine the correct device name to use. The output of **lsblk** removes the `/dev/` prefix from full device paths.

   The following is example output for an instance built on the [ Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html), which exposes EBS volumes as NVMe block devices. The root device is `/dev/nvme0n1`, which has two partitions named `nvme0n1p1` and `nvme0n1p128`. The attached volume is `/dev/nvme1n1`, which has no partitions and is not yet mounted.

   ```
   [ec2-user ~]$ lsblk
   NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
   nvme1n1       259:0    0  10G  0 disk
   nvme0n1       259:1    0   8G  0 disk
   -nvme0n1p1    259:2    0   8G  0 part /
   -nvme0n1p128  259:3    0   1M  0 part
   ```

   The following is example output for a T2 instance. The root device is `/dev/xvda`, which has one partition named `xvda1`. The attached volume is `/dev/xvdf`, which has no partitions and is not yet mounted.

   ```
   [ec2-user ~]$ lsblk
   NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
   xvda    202:0    0    8G  0 disk
   -xvda1  202:1    0    8G  0 part /
   xvdf    202:80   0   10G  0 disk
   ```

1. Determine whether there is a file system on the volume. New volumes are raw block devices, and you must create a file system on them before you can mount and use them. Volumes that were created from snapshots likely have a file system on them already; if you create a new file system on top of an existing file system, the operation overwrites your data.

   Use one or both of the following methods to determine whether there is a file system on the volume:
   + Use the **file -s** command to get information about a specific device, such as its file system type. If the output shows simply `data`, as in the following example output, there is no file system on the device.

     ```
     [ec2-user ~]$ sudo file -s /dev/xvdf
     /dev/xvdf: data
     ```

     If the device has a file system, the command shows information about the file system type. For example, the following output shows a root device with the XFS file system.

     ```
     [ec2-user ~]$ sudo file -s /dev/xvda1
     /dev/xvda1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
     ```
   + Use the **lsblk -f** command to get information about all of the devices attached to the instance.

     ```
     [ec2-user ~]$ sudo lsblk -f
     ```

     For example, the following output shows that there are three devices attached to the instance: `nvme1n1`, `nvme0n1`, and `nvme2n1`. The first column lists the devices and their partitions. The `FSTYPE` column shows the file system type for each device. If the column is empty for a specific device, it means that the device does not have a file system. In this case, device `nvme1n1` and partition `nvme0n1p1` on device `nvme0n1` are both formatted using the XFS file system, while device `nvme2n1` and partition `nvme0n1p128` on device `nvme0n1` do not have file systems.

     ```
     NAME		FSTYPE	LABEL	UUID						MOUNTPOINT
     nvme1n1	        xfs		7f939f28-6dcc-4315-8c42-6806080b94dd
     nvme0n1
     ├─nvme0n1p1	xfs	    /	90e29211-2de8-4967-b0fb-16f51a6e464c	        /
     └─nvme0n1p128
     nvme2n1
     ```

   If the output from these commands shows that there is no file system on the device, you must create one.

1. <a name="create_file_system_step"></a>(Conditional) If you discovered that there is a file system on the device in the previous step, skip this step. If you have an empty volume, use the **mkfs -t** command to create a file system on the volume.
**Warning**  
Do not use this command if you're mounting a volume that already has data on it (for example, a volume that was created from a snapshot). Otherwise, you'll format the volume and delete the existing data.

   ```
   [ec2-user ~]$ sudo mkfs -t xfs /dev/xvdf
   ```

   If you get an error that `mkfs.xfs` is not found, use the following command to install the XFS tools and then repeat the previous command:

   ```
   [ec2-user ~]$ sudo yum install xfsprogs
   ```

1. Use the **mkdir** command to create a mount point directory for the volume. The mount point is where the volume is located in the file system tree and where you read and write files to after you mount the volume. The following example creates a directory named `/data`.

   ```
   [ec2-user ~]$ sudo mkdir /data
   ```

1. Mount the volume or partition at the mount point directory you created in the previous step.

   If the volume has no partitions, use the following command and specify the device name to mount the entire volume.

   ```
   [ec2-user ~]$ sudo mount /dev/xvdf /data
   ```

   If the volume has partitions, use the following command and specify the partition name to mount a partition.

   ```
   [ec2-user ~]$ sudo mount /dev/xvdf1 /data
   ```

1. Review the file permissions of your new volume mount to make sure that your users and applications can write to the volume. For more information about file permissions, see [File security](https://tldp.org/LDP/intro-linux/html/sect_03_04.html) at *The Linux Documentation Project*.

1. The mount point is not automatically preserved after rebooting your instance. To automatically mount this EBS volume after reboot, follow the next procedure.

### Automatically mount an attached volume after reboot


To mount an attached EBS volume on every system reboot, add an entry for the device to the `/etc/fstab` file.

You can use the device name, such as `/dev/xvdf`, in `/etc/fstab`, but we recommend using the device's 128-bit universally unique identifier (UUID) instead. Device names can change, but the UUID persists throughout the life of the partition. By using the UUID, you reduce the chances that the system becomes unbootable after a hardware reconfiguration. For more information, see [Map Amazon EBS volumes to NVMe device names](identify-nvme-ebs-device.md).

**To mount an attached volume automatically after reboot**

1. (Optional) Create a backup of your `/etc/fstab` file that you can use if you accidentally destroy or delete this file while editing it.

   ```
   [ec2-user ~]$ sudo cp /etc/fstab /etc/fstab.orig
   ```

1. Use the **blkid** command to find the UUID of the device. Make a note of the UUID of the device that you want to mount after reboot. You'll need it in the following step.

   For example, the following command shows that there are two devices mounted to the instance, and it shows the UUIDs for both devices.

   ```
   [ec2-user ~]$ sudo blkid
   /dev/xvda1: LABEL="/" UUID="ca774df7-756d-4261-a3f1-76038323e572" TYPE="xfs" PARTLABEL="Linux" PARTUUID="02dcd367-e87c-4f2e-9a72-a3cf8f299c10"
   /dev/xvdf: UUID="aebf131c-6957-451e-8d34-ec978d9581ae" TYPE="xfs"
   ```

   For Ubuntu 18.04, use the **lsblk** command.

   ```
   [ec2-user ~]$ sudo lsblk -o +UUID
   ```

1. Open the `/etc/fstab` file using any text editor, such as **nano** or **vim**.

   ```
   [ec2-user ~]$ sudo vim /etc/fstab
   ```

1. Add the following entry to `/etc/fstab` to mount the device at the specified mount point. The fields are the UUID value returned by **blkid** (or **lsblk** for Ubuntu 18.04), the mount point, the file system, and the recommended file system mount options. For more information about the required fields, run `man fstab` to open the **fstab** manual.

   In the following example, we mount the device with UUID `aebf131c-6957-451e-8d34-ec978d9581ae` to mount point `/data` and we use the `xfs` file system. We also use the `defaults` and `nofail` flags. We specify `0` to prevent the file system from being dumped, and we specify `2` to indicate that it is a non-root device.

   ```
   UUID=aebf131c-6957-451e-8d34-ec978d9581ae  /data  xfs  defaults,nofail  0  2
   ```
**Note**  
If you ever boot your instance without this volume attached (for example, after moving the volume to another instance), the `nofail` mount option enables the instance to boot even if there are errors mounting the volume. Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the `nobootwait` mount option.

1. To verify that your entry works, run the following commands to unmount the device and then mount all file systems in `/etc/fstab`. If there are no errors, the `/etc/fstab` file is OK and your file system will mount automatically after the instance is rebooted.

   ```
   [ec2-user ~]$ sudo umount /data
   [ec2-user ~]$ sudo mount -a
   ```

   If you receive an error message, address the errors in the file.
**Warning**  
Errors in the `/etc/fstab` file can render a system unbootable. Do not shut down a system that has errors in the `/etc/fstab` file.

   If you are unsure how to correct errors in `/etc/fstab` and you created a backup file in the first step of this procedure, you can restore from your backup file using the following command.

   ```
   [ec2-user ~]$ sudo mv /etc/fstab.orig /etc/fstab
   ```

## Windows instances


Use one of the following methods to make a volume available on a Windows instance.

------
#### [ PowerShell ]

**To make all EBS volumes with raw partitions available to use with Windows PowerShell**

1. Log in to your Windows instance using Remote Desktop. For more information, see [ Connect to your Windows instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connecting_to_windows_instance.html).

1. On the taskbar, open the Start menu, and choose **Windows PowerShell**.

1. At the PowerShell prompt, run the following series of Windows PowerShell commands. The script performs the following actions by default:

   1. Stops the ShellHWDetection service.

   1. Enumerates disks where the partition style is raw.

   1. Creates a new partition that spans the maximum size the disk and partition type will support.

   1. Assigns an available drive letter.

   1. Formats the file system as NTFS with the specified file system label.

   1. Starts the ShellHWDetection service again.

   ```
   Stop-Service -Name ShellHWDetection
   Get-Disk | Where PartitionStyle -eq 'raw' | Initialize-Disk -PartitionStyle MBR -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Volume Label" -Confirm:$false
   Start-Service -Name ShellHWDetection
   ```

------
#### [ DiskPart command line tool ]

**To make an EBS volume available to use with the DiskPart command line tool**

1. Log in to your Windows instance using Remote Desktop. For more information, see [ Connect to your Windows instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connecting_to_windows_instance.html).

1. Determine the disk number that you want to make available:

   1. Open the Start menu, and select Windows PowerShell.

   1. Use the `Get-Disk` Cmdlet to retrieve a list of available disks.

   1. In the command output, note the **Number** corresponding to the disk that you're making available.

1. Create a script file to execute DiskPart commands:

   1. Open the Start menu, and select **File Explorer**.

   1. Navigate to a directory, such as `C:\`, in which to store the script file.

   1. Right-click an empty space within the folder, position the cursor over **New** to open the context menu, and then choose **Text Document**.

   1. Name the text file `diskpart.txt`.

1. Add the following commands to the script file. You may need to modify the disk number, partition type, volume label, and drive letter. The script performs the following actions by default:

   1. Selects disk 1 for modification.

   1. Configures the volume to use the master boot record (MBR) partition structure.

   1. Formats the volume as an NTFS volume.

   1. Sets the volume label.

   1. Assigns the volume a drive letter.
**Warning**  
If you're mounting a volume that already has data on it, do not reformat the volume or you will delete the existing data.

   ```
   select disk 1 
   attributes disk clear readonly 
   online disk noerr
   convert mbr 
   create partition primary 
   format quick fs=ntfs label="volume_label" 
   assign letter="drive_letter"
   ```

   For more information, see [DiskPart Syntax and Parameters](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-vista/cc766465(v=ws.10)#diskpart-syntax-and-parameters).

1. Open a command prompt, navigate to the folder in which the script is located, and run the following command to make a volume available for use on the specified disk:

   ```
   C:\> diskpart /s diskpart.txt
   ```

------
#### [ Disk Management utility ]

**To make an EBS volume available to use with the Disk Management utility**

1. Log in to your Windows instance using Remote Desktop. For more information, see [ Connect to your Windows instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connecting_to_windows_instance.html).

1. Start the Disk Management utility. On the taskbar, open the context (right-click) menu for the Windows logo, and choose **Disk Management**.
**Note**  
In Windows Server 2008, choose **Start**, **Administrative Tools**, **Computer Management**, **Disk Management**.

1. Bring the volume online. In the lower pane, open the context (right-click) menu for the left panel of the disk that corresponds to the EBS volume. Choose **Online**.  
![\[Bring the volume online.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-volume-online.png)

1. (Conditional) If the disk is not initialized, you must initialize it before you can use it. If the disk is already initialized, skip this step.
**Warning**  
If you're mounting a volume that already has data on it (for example, a public data set, or a volume that you created from a snapshot), do not reformat the volume or you will delete the existing data.

   If the disk is not initialized, initialize it as follows:

   1. Open the context (right-click) menu for the left panel for the disk, and choose **Initialize Disk**.  
![\[Initialize the volume.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-volume-initialize.png)

   1. In the **Initialize Disk** dialog box, select a partition style, and choose **OK**.  
![\[Initialize volume settings.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-volume-initialize-settings.png)

1. Open the context (right-click) menu for the right panel for the disk, and choose **New Simple Volume**.  
![\[Mount a simple volume.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-new-simple-volume.png)

1. In the **New Simple Volume Wizard**, choose **Next**.  
![\[Begin the New Simple Volume Wizard.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-new-simple-volume-wizard-welcome.png)

1. If you want to change the default maximum value, specify the **Simple volume size in MB**, and then choose **Next.**  
![\[Specify the volume size.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-new-simple-volume-wizard-size.png)

1. Specify a preferred drive letter, if necessary, within the **Assign the following drive letter** dropdown, and then choose **Next.**  
![\[Specify a drive letter.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-new-simple-volume-wizard-letter.png)

1. Specify a **Volume Label** and adjust the default settings as necessary, and then choose **Next.**  
![\[Specify settings to format the volume.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-new-simple-volume-wizard-format.png)

1. Review your settings, and then choose **Finish** to apply the modifications and close the New Simple Volume wizard.  
![\[Review your settings and finish the wizard.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/windows-2016-new-simple-volume-wizard-finish.png)

------

# View information about an Amazon EBS volume
View volume details

You can view descriptive information about your EBS volumes. For example, you can view information about all volumes in a specific Region or view detailed information about a single volume, including its size, volume type, whether the volume is encrypted, which KMS key was used to encrypt the volume, and the specific instance to which the volume is attached.

You can get additional information about your EBS volumes, such as how much disk space is available, from the operating system on the instance.

**Topics**
+ [View volume information](#ebs-view-information-console)
+ [Volume states](#volume-state)
+ [View volume metrics](#ebs-view-volume-metrics)
+ [View free disk space](#ebs-view-free-disk-space-lin)

## View volume information


You can view information about your EBS volumes.

------
#### [ Console ]

**To view information about a volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**. 

1. To reduce the list, you can filter your volumes using tags and volume attributes. Choose the filter field, select a tag or volume attribute, and then select the filter value.

1. To view more information about a volume, choose its ID.

**To view the EBS volumes that are attached to an instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Select the instance.

1. On the **Storage** tab, the **Block devices** section lists the volumes that are attached to the instance. To view information about a specific volume, choose its ID in the **Volume ID** column.

------
#### [ Amazon EC2 Global View ]

You can use [Amazon EC2 Global View](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/global-view.html) to view your volumes across all Regions for which your AWS account is enabled.

**To get a summary of your EBS volumes across all Regions**

1. Open the Amazon EC2 Global View console at [https://console.aws.amazon.com/ec2globalview/home](https://console.aws.amazon.com/ec2globalview/home).

1. On the **Region explorer** tab, under **Summary**, check the resource count for **Volumes**, which includes the number of volumes and the number of Regions. Choose the underlined text to see how the volume count is spread across the Regions.

1. On the **Global search** tab, select the client filter **Resource type = Volume**. You can filter the results further by specifying a Region or a tag.

------
#### [ AWS CLI ]

**To view information about an EBS volume**  
Use the [describe-volumes](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes.html) command. The following example counts the volumes in the current Region.

```
aws ec2 describe-volumes --query "length(Volumes[*])"
```

The following example lists the volumes attached to the specified instance.

```
aws ec2 describe-volumes \
    --filters "Name=attachment.instance-id,Values=i-1234567890abcdef0" \
    --query "Volumes[*].VolumeId" \
    --output text
```

The following example describes the specified volume.

```
aws ec2 describe-volumes --volume-ids vol-01234567890abcdef
```

The following is example output.

```
{
    "Volumes": [
        {
            "Iops": 3000,
            "VolumeType": "gp3",
            "MultiAttachEnabled": false,
            "Throughput": 125,
            "Operator": {
                "Managed": false
            },
            "VolumeId": "vol-01234567890abcdef",
            "Size": 8,
            "SnapshotId": "snap-0abcdef1234567890",
            "AvailabilityZone": "us-west-2b",
            "State": "in-use",
            "CreateTime": "2024-05-17T23:23:00.400000+00:00",
            "Attachments": [
                {
                    "DeleteOnTermination": true,
                    "VolumeId": "vol-01234567890abcdef",
                    "InstanceId": "i-1234567890abcdef0",
                    "Device": "/dev/xvda",
                    "State": "attached",
                    "AttachTime": "2024-05-17T23:23:00+00:00"
                }
            ],
            "Encrypted": false
        }
    ]
}
```
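
If you are scripting against this output, you can process the JSON response with any JSON-aware tool. The following Python sketch (an illustration only, not part of the AWS CLI) extracts the volume ID, state, and attached instances from a response shaped like the example above:

```python
import json

# A response shaped like the describe-volumes example output above.
response = json.loads("""
{
    "Volumes": [
        {
            "VolumeId": "vol-01234567890abcdef",
            "VolumeType": "gp3",
            "State": "in-use",
            "Attachments": [
                {"InstanceId": "i-1234567890abcdef0", "State": "attached"}
            ]
        }
    ]
}
""")

for volume in response["Volumes"]:
    attached_to = [a["InstanceId"] for a in volume["Attachments"]]
    print(volume["VolumeId"], volume["State"], attached_to)
```

Alternatively, the `--query` option shown earlier performs the same kind of filtering within the CLI itself, without a separate script.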

------
#### [ PowerShell ]

**To view information about an EBS volume**  
Use the [Get-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2Volume.html) cmdlet. The following example counts the volumes in the current Region.

```
(Get-EC2Volume).Count
```

The following example lists the volumes attached to the specified instance.

```
(Get-EC2Volume `
    -Filters @{Name="attachment.instance-id";Values="i-1234567890abcdef0"}).VolumeId
```

The following example describes the specified volume.

```
Get-EC2Volume -VolumeId vol-01234567890abcdef
```

The following is example output.

```
Attachments        : {i-1234567890abcdef0}
AvailabilityZone   : us-west-2b
CreateTime         : 5/17/2024 11:23:00 PM
Encrypted          : False
FastRestored       : False
Iops               : 3000
KmsKeyId           : 
MultiAttachEnabled : False
Operator           : Amazon.EC2.Model.OperatorResponse
OutpostArn         : 
Size               : 8
SnapshotId         : snap-0abcdef1234567890
SseType            : 
State              : in-use
Tags               : {}
Throughput         : 125
VolumeId           : vol-01234567890abcdef
VolumeType         : gp3
```

------

## Volume states


Volume state describes the availability of an Amazon EBS volume. You can view the volume state in the **State** column on the **Volumes** page in the console, or by using the [describe-volumes](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes.html) AWS CLI command.

An Amazon EBS volume transitions through different states from the moment it is created until it is deleted.

The following illustration shows the transitions between volume states. You can create a volume from an Amazon EBS snapshot or create an empty volume. When you create a volume, it enters the `creating` state. After the volume is ready for use, it enters the `available` state. You can attach an available volume to an instance in the same Availability Zone as the volume. You must detach the volume before you can attach it to a different instance or delete it. You can delete a volume when you no longer need it.

![\[The lifecycle of an EBS volume.\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/volume-states.png)


The following table summarizes the volume states.


| State | Description | 
| --- | --- | 
| creating | The volume is being created. | 
| available | The volume is not attached to an instance. | 
| in-use | The volume is attached to an instance. | 
| deleting | The volume is being deleted. | 
| deleted | The volume is deleted. | 
| error | The underlying hardware related to your EBS volume has failed, and the data associated with the volume is unrecoverable. For information about how to restore the volume or recover the data on the volume, see [Why does my EBS volume have a status of "error"?](https://repost.aws/knowledge-center/ebs-error-status). | 

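The transitions between these states follow the lifecycle shown in the illustration. As an illustration only (this is not an AWS API), the valid transitions can be modeled as a small lookup table:

```python
# Valid EBS volume state transitions, mirroring the lifecycle described above.
# Illustration only; not an AWS API.
TRANSITIONS = {
    "creating": {"available"},
    "available": {"in-use", "deleting"},
    "in-use": {"available"},   # detaching returns the volume to available
    "deleting": {"deleted"},
}

def can_transition(current, target):
    return target in TRANSITIONS.get(current, set())

print(can_transition("available", "in-use"))  # True: attach the volume
print(can_transition("in-use", "deleting"))   # False: detach it first
```

This captures the rule stated earlier: you must detach a volume (returning it to `available`) before you can delete it.
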
## View volume metrics


You can get additional information about your EBS volumes from Amazon CloudWatch. For more information, see [Amazon CloudWatch metrics for Amazon EBS](using_cloudwatch_ebs.md).

## View free disk space


You can get additional information about your EBS volumes, such as how much disk space is available, from the operating system on the instance.

### Linux instances


Use the **df -hT** command and specify the device name:

```
[ec2-user ~]$ df -hT /dev/xvda1
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/xvda1     xfs       8.0G  1.2G  6.9G  15% /
```

### Windows instances


You can view the free disk space by opening File Explorer and selecting **This PC**.

You can also view the free disk space using the following `dir` command and examining the last line of the output:

```
C:\> dir C:
 Volume in drive C has no label.
 Volume Serial Number is 68C3-8081

 Directory of C:\

03/25/2018  02:10 AM    <DIR>          .
03/25/2018  02:10 AM    <DIR>          ..
03/25/2018  03:47 AM    <DIR>          Contacts
03/25/2018  03:47 AM    <DIR>          Desktop
03/25/2018  03:47 AM    <DIR>          Documents
03/25/2018  03:47 AM    <DIR>          Downloads
03/25/2018  03:47 AM    <DIR>          Favorites
03/25/2018  03:47 AM    <DIR>          Links
03/25/2018  03:47 AM    <DIR>          Music
03/25/2018  03:47 AM    <DIR>          Pictures
03/25/2018  03:47 AM    <DIR>          Saved Games
03/25/2018  03:47 AM    <DIR>          Searches
03/25/2018  03:47 AM    <DIR>          Videos
               0 File(s)              0 bytes
              13 Dir(s)  18,113,662,976 bytes free
```

You can also view the free disk space using the following `fsutil` command:

```
C:\> fsutil volume diskfree C:
Total # of free bytes        : 18113204224
Total # of bytes             : 32210153472
Total # of avail free bytes  : 18113204224
```
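
The byte counts that `fsutil` reports can be converted into a utilization figure with simple arithmetic. The following Python sketch uses the numbers from the example output above:

```python
# Byte counts taken from the fsutil example output above.
free_bytes = 18_113_204_224
total_bytes = 32_210_153_472

free_pct = 100 * free_bytes / total_bytes
free_gib = free_bytes / 2**30  # GiB = 2^30 bytes

print(f"{free_gib:.1f} GiB free ({free_pct:.1f}% of the volume)")
```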

**Tip**  
You can also use the CloudWatch agent to collect disk space usage metrics from an Amazon EC2 instance without connecting to the instance. For more information, see [ Create the CloudWatch agent configuration file](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file.html) and [Installing the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html) in the *Amazon CloudWatch User Guide*. If you need to monitor disk space usage for multiple instances, you can install and configure the CloudWatch agent on those instances using Systems Manager. For more information, see [ Installing the CloudWatch agent using Systems Manager](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/installing-cloudwatch-agent-ssm.html).

# Modify an Amazon EBS volume using Elastic Volumes operations
Modify a volume

With Amazon EBS Elastic Volumes, you can increase the volume size, change the volume type, or adjust the performance of your EBS volumes. If your instance supports Elastic Volumes, you can do so without detaching the volume or restarting the instance. This enables you to continue using your application while the changes take effect.

There is no charge to modify the configuration of a volume. You are charged for the new volume configuration after volume modification starts. For more information, see the [Amazon EBS Pricing](https://aws.amazon.com/ebs/pricing/) page.

**Topics**
+ [Considerations](#elastic-volumes-considerations)
+ [Limitations](#elastic-volumes-limitations)
+ [Requirements for Amazon EBS volume modifications](modify-volume-requirements.md)
+ [Request Amazon EBS volume modifications](requesting-ebs-volume-modifications.md)
+ [Monitor the progress of Amazon EBS volume modifications](monitoring-volume-modifications.md)
+ [Extend the file system after resizing an Amazon EBS volume](recognize-expanded-volume-linux.md)

## Considerations

+ After you initiate a volume modification, you must wait for that modification to reach the `completed` state before you can initiate another modification for the same volume. You can modify a volume up to four times within a rolling 24-hour period, as long as the volume is in the `in-use` or `available` state, and all previous modifications for that volume are `completed`. If you exceed this limit, you get an error message that indicates when you can perform your next modification.
+ Volume modifications are performed on a best-effort basis, and they can take from a few minutes to a few hours to complete, depending on the requested volume configuration. Typically, a 1-TiB volume can take up to six hours to be modified. However, the time does not always scale linearly with volume size; a larger volume might take less time, and a smaller volume might take more time.
+ Size increases take effect once the volume modification reaches the `optimizing` state, which usually takes a few seconds.
+ Modification takes longer for volumes that are not fully initialized. For more information, see [Manually initialize the volumes after creation](initalize-volume.md#ebs-initialize).
+ If you change the volume type from `gp2` to `gp3`, and you do not specify IOPS or throughput performance, Amazon EBS automatically provisions either equivalent performance to that of the source `gp2` volume, or the baseline `gp3` performance, whichever is higher.

  For example, if you modify a 500 GiB `gp2` volume with 250 MiB/s throughput and 1500 IOPS to `gp3` without specifying IOPS or throughput performance, Amazon EBS automatically provisions the `gp3` volume with 3000 IOPS (baseline `gp3` IOPS) and 250 MiB/s (to match the source `gp2` volume throughput).
+ If you encounter an error message while attempting to modify an EBS volume, or if you are modifying an EBS volume attached to a previous-generation instance type, take one of the following steps:
  + For a non-root volume, detach the volume from the instance, apply the modifications, and then re-attach the volume.
  + For a root volume, stop the instance, apply the modifications, and then restart the instance.
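
The gp2-to-gp3 provisioning rule described above (take the higher of the source `gp2` performance and the `gp3` baseline) can be sketched as follows. The baseline figures of 3,000 IOPS and 125 MiB/s are the `gp3` defaults; the helper function itself is an illustration, not an AWS API:

```python
GP3_BASELINE_IOPS = 3000
GP3_BASELINE_THROUGHPUT = 125  # MiB/s

def gp3_performance_from_gp2(gp2_iops, gp2_throughput):
    """Performance provisioned when converting gp2 to gp3 without
    specifying IOPS or throughput, per the rule described above."""
    return (max(gp2_iops, GP3_BASELINE_IOPS),
            max(gp2_throughput, GP3_BASELINE_THROUGHPUT))

# The 500 GiB gp2 example above: 1500 IOPS and 250 MiB/s throughput.
print(gp3_performance_from_gp2(1500, 250))  # (3000, 250)
```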

## Limitations

+ You can't cancel a volume modification request after it has been submitted.
+ You can only increase the volume size; you can't decrease it. However, you can create a smaller volume and then migrate your data to it using an application-level tool such as **rsync** (Linux instances) or **robocopy** (Windows instances).
+ There are limits to the maximum aggregated storage that can be requested across volume modifications. For more information, see [Amazon EBS service quotas](https://docs.aws.amazon.com/general/latest/gr/ebs-service.html#limits_ebs) in the *Amazon Web Services General Reference*.
+ The new volume size can't exceed the supported capacity of its file system and partitioning scheme. For more information, see [Amazon EBS volume constraints](volume_constraints.md).
+ If you are not changing the volume type, then volume size and performance modifications must be within the limits of the current volume type. If you are changing the volume type, then volume size and performance modifications must be within the limits of the target volume type. For more information, see [Amazon EBS volume types](ebs-volume-types.md).
+ [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) support volumes provisioned with up to 256,000 IOPS. Other instance types can be attached to volumes provisioned with up to 64,000 IOPS, but can achieve up to 32,000 IOPS.
+ You can't modify the volume type for Multi-Attach enabled `io2` volumes.
+ You can't modify the volume type, size, or Provisioned IOPS of Multi-Attach enabled `io1` volumes.
+ A root volume of type `io1`, `io2`, `gp2`, `gp3`, or `standard` can't be modified to an `st1` or `sc1` volume, even if it is detached from the instance.
+ If the volume was attached before November 3, 2016 23:40 UTC, you must initialize Elastic Volumes support. For more information, see [Initializing Elastic Volumes Support](requesting-ebs-volume-modifications.md#initialize-modification-support).
+ While `m3.medium` instances fully support volume modification, `m3.large`, `m3.xlarge`, and `m3.2xlarge` instances might not support all volume modification features.

# Requirements for Amazon EBS volume modifications
Requirements

The following requirements and limitations apply when you modify an Amazon EBS volume. To learn more about the general requirements for EBS volumes, see [Amazon EBS volume constraints](volume_constraints.md).

**Topics**
+ [Supported instance types](#instance-support)
+ [Operating system](#operating-system)

## Supported instance types


Elastic Volumes is supported on the following instances:
+ All [current generation instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#current-gen-instances)
+ The following previous-generation instances: C1, C3, C4, G2, I2, M1, M3, M4, R3, and R4

If your instance type does not support Elastic Volumes, see [Modify an EBS volume if Elastic Volumes is not supported](requesting-ebs-volume-modifications.md#modify-volume-stop-start).

## Operating system


The following operating system requirements apply:

### Linux


Linux AMIs require a GUID partition table (GPT) and GRUB 2 for boot volumes that are 2 TiB (2,048 GiB) or larger. Many Linux AMIs today still use the MBR partitioning scheme, which only supports boot volume sizes up to 2 TiB. If your instance does not boot with a boot volume larger than 2 TiB, the AMI you are using may be limited to a boot volume size of less than 2 TiB. Non-boot volumes do not have this limitation on Linux instances.

Before attempting to resize a boot volume beyond 2 TiB, you can determine whether the volume is using MBR or GPT partitioning by running the following command on your instance:

```
[ec2-user ~]$ sudo gdisk -l /dev/xvda
```

An Amazon Linux instance with GPT partitioning returns the following information:

```
GPT fdisk (gdisk) version 0.8.10
  
  Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present
  
  Found valid GPT with protective MBR; using GPT.
```

A SUSE instance with MBR partitioning returns the following information:

```
GPT fdisk (gdisk) version 0.8.8
  
  Partition table scan:
    MBR: MBR only
    BSD: not present
    APM: not present
    GPT: not present
```
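
If you need to check many instances, you can classify the **gdisk** output programmatically. The following Python sketch (an illustration that assumes output shaped like the examples above) looks for the `GPT:` line in the partition table scan:

```python
def partition_scheme(gdisk_output):
    """Classify a boot disk as GPT or MBR from `gdisk -l` output.
    Illustration only; assumes the 'Partition table scan' block shown above."""
    for line in gdisk_output.splitlines():
        line = line.strip()
        if line == "GPT: present":
            return "GPT"
        if line == "GPT: not present":
            return "MBR"
    return "unknown"

# Condensed from the example outputs above.
print(partition_scheme("Partition table scan:\n  GPT: present"))      # GPT
print(partition_scheme("Partition table scan:\n  GPT: not present"))  # MBR
```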

### Windows


By default, Windows initializes volumes with a Master Boot Record (MBR) partition table. Because MBR supports only volumes smaller than 2 TiB (2,048 GiB), Windows prevents you from resizing MBR volumes beyond this limit. In such a case, the **Extend Volume** option is disabled in the Windows **Disk Management** utility. If you use the AWS Management Console or AWS CLI to create an MBR-partitioned volume that exceeds the size limit, Windows cannot detect or use the additional space.

To overcome this limitation, you can create a new, larger volume with a GUID partition table (GPT) and copy over the data from the original MBR volume. 

**To create a GPT volume**

1. Create a new, empty volume of the desired size in the same Availability Zone as the EC2 instance, and attach it to the instance.
**Note**  
The new volume must not be a volume restored from a snapshot.

1. Log in to your Windows system and open **Disk Management** (**diskmgmt.exe**). 

1. Open the context (right-click) menu for the new disk and choose **Online**.

1. In the **Initialize Disk** window, select the new disk and choose **GPT (GUID Partition Table)**, **OK**.

1. When initialization is complete, copy the data from the original volume to the new volume, using a tool such as **robocopy** or **teracopy**.

1. In **Disk Management**, change the drive letters to appropriate values and take the old volume offline.

1. In the Amazon EC2 console, detach the old volume from the instance, reboot the instance to verify that it functions properly, and delete the old volume.

# Request Amazon EBS volume modifications
Request volume modifications

With Elastic Volumes, you can dynamically increase the size, increase or decrease the performance, and change the volume type of your Amazon EBS volumes without detaching them.

**Process overview**

1. (Optional) Before modifying a volume that contains valuable data, it is a best practice to create a snapshot of the volume in case you need to roll back your changes. For more information, see [Create Amazon EBS snapshots](ebs-creating-snapshot.md).

1. Request the volume modification.

1. Monitor the progress of the volume modification. For more information, see [Monitor the progress of Amazon EBS volume modifications](monitoring-volume-modifications.md).

1. If the size of the volume was modified, extend the volume's file system to take advantage of the increased storage capacity. For more information, see [Extend the file system after resizing an Amazon EBS volume](recognize-expanded-volume-linux.md).

**Topics**
+ [Modify an EBS volume using Elastic Volumes](#modify-ebs-volume)
+ [Modify an EBS volume if Elastic Volumes is not supported](#modify-volume-stop-start)
+ [Initialize Elastic Volumes support (if needed)](#initialize-modification-support)

## Modify an EBS volume using Elastic Volumes


Before you begin, see the following:
+ [Considerations](ebs-modify-volume.md#elastic-volumes-considerations)
+ [Limitations](ebs-modify-volume.md#elastic-volumes-limitations)
+ [Requirements](modify-volume-requirements.md)

------
#### [ Console ]<a name="console-modify-size"></a>

**To modify an EBS volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume to modify and choose **Actions**, **Modify volume**.

1. The **Modify volume** screen displays the volume ID and the volume's current configuration, including type, size, IOPS, and throughput. Set new configuration values as follows:
   + To modify the type, choose a value for **Volume type**.
   + To modify the size, enter a new value for **Size**.
   + (`gp3`, `io1`, and `io2` only) To modify the IOPS, enter a new value for **IOPS**.
   + (`gp3` only) To modify the throughput, enter a new value for **Throughput**.

1. After you have finished changing the volume settings, choose **Modify**. When prompted for confirmation, choose **Modify**.

1. If you've increased the size of your volume, then you must also extend the volume's file system to make use of the additional storage capacity. For more information, see [Extend the file system after resizing an Amazon EBS volume](recognize-expanded-volume-linux.md).

1. (*Windows instances only*) If you increase the size of an NVMe volume on an instance that does not have the AWS NVMe drivers, you must reboot the instance to enable Windows to see the new volume size. For more information about installing the AWS NVMe drivers, see [AWS NVMe drivers](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/aws-nvme-drivers.html).

------
#### [ AWS CLI ]

**To modify an EBS volume**  
Use the [modify-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-volume.html) command. For example, if you have a volume of type `gp2` with a size of 100 GiB, the following example changes its configuration to a volume of type `io1` with 10,000 IOPS and a size of 200 GiB.

```
aws ec2 modify-volume \
    --volume-id vol-01234567890abcdef \
    --volume-type io1 \
    --iops 10000 \
    --size 200
```

The following is example output.

```
{
    "VolumeModification": {
        "TargetSize": 200,
        "TargetVolumeType": "io1",
        "ModificationState": "modifying",
        "VolumeId": "vol-01234567890abcdef",
        "TargetIops": 10000,
        "StartTime": "2022-01-19T22:21:02.959Z",
        "Progress": 0,
        "OriginalVolumeType": "gp2",
        "OriginalIops": 300,
        "OriginalSize": 100
    }
}
```

If you've increased the size of your volume, then you must also extend the volume's file system to make use of the additional storage capacity. For more information, see [Extend the file system after resizing an Amazon EBS volume](recognize-expanded-volume-linux.md).

------
#### [ PowerShell ]

**To modify an EBS volume**  
Use the [Edit-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Edit-EC2Volume.html) cmdlet. For example, if you have a volume of type `gp2` with a size of 100 GiB, the following example changes its configuration to a volume of type `io1` with 10,000 IOPS and a size of 200 GiB.

```
Edit-EC2Volume `
    -VolumeId vol-01234567890abcdef `
    -VolumeType io1 `
    -Iops 10000 `
    -Size 200
```

If you've increased the size of your volume, then you must also extend the volume's file system to make use of the additional storage capacity. For more information, see [Extend the file system after resizing an Amazon EBS volume](recognize-expanded-volume-linux.md).

------

## Modify an EBS volume if Elastic Volumes is not supported


If you are using a supported instance type, you can use Elastic Volumes to dynamically modify the size, performance, and volume type of your Amazon EBS volumes without detaching them.

If you cannot use Elastic Volumes but you need to modify the root (boot) volume, you must stop the instance, modify the volume, and then restart the instance.

After the instance has started, you can check the file system size to see if your instance recognizes the larger volume space. On Linux, use the **df -h** command to check the file system size.

```
[ec2-user ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.9G  943M  6.9G  12% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
```

If the size does not reflect your newly expanded volume, you must extend the file system of your device so that your instance can use the new space. For more information, see [Extend the file system after resizing an Amazon EBS volume](recognize-expanded-volume-linux.md).

With Windows instances, you might have to bring the volume online in order to use it. For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md). You do not need to reformat the volume.

## Initialize Elastic Volumes support (if needed)


Before you can modify a volume that was attached to an instance before November 3, 2016 23:40 UTC, you must initialize volume modification support using one of the following actions:
+ Detach and attach the volume
+ Stop and start the instance

------
#### [ Console ]

**To determine whether your instances are ready**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Choose the **Show/Hide Columns** icon (the gear). Select the **Launch time** attribute column and then choose **Confirm**.

1. Sort the list of instances by the **Launch time** column. For each instance that was started before the cutoff date, choose the **Storage** tab and check the **Attachment time** column to see when its volumes were attached.

------
#### [ AWS CLI ]

**To determine whether your instances are ready**  
Use the following [describe-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html) command to determine whether the volume was attached before November 3, 2016 23:40 UTC.

```
aws ec2 describe-instances \
    --query "Reservations[*].Instances[*].[InstanceId,LaunchTime<='2016-11-01',BlockDeviceMappings[*][Ebs.AttachTime<='2016-11-01']]" \
    --output text
```

The first line of the output for each instance shows its ID and whether it was started before the cutoff date (True or False). The first line is followed by one or more lines that show whether each EBS volume was attached before the cutoff date (True or False). In the following example output, you must initialize volume modification for the first instance because it was started before the cutoff date and its root volume was attached before the cutoff date. The other instances are ready because they were started after the cutoff date.

```
i-e905622e              True
True
i-719f99a8              False
True
i-006b02c1b78381e57     False
False
False
i-e3d172ed              False
True
```
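
The cutoff comparison itself is a plain timestamp check. As an illustration (not an AWS API), given a volume's attach time you can decide whether Elastic Volumes support must be initialized:

```python
from datetime import datetime, timezone

# Volumes attached before this time need Elastic Volumes initialization.
CUTOFF = datetime(2016, 11, 3, 23, 40, tzinfo=timezone.utc)

def needs_initialization(attach_time):
    return attach_time < CUTOFF

print(needs_initialization(datetime(2016, 10, 1, tzinfo=timezone.utc)))  # True
print(needs_initialization(datetime(2020, 3, 23, tzinfo=timezone.utc)))  # False
```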

------
#### [ PowerShell ]

**To determine whether an instance is ready**  
Use the [Get-EC2Instance](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2Instance.html) cmdlet to determine whether a volume was attached before November 3, 2016 23:40 UTC.

```
(Get-EC2Instance `
    -InstanceId i-1234567890abcdef0).Instances.BlockDeviceMappings | `
     Format-Table @{Name="VolumeId";Expression={$_.Ebs.VolumeId}}, `
                  @{Name="AttachTime";Expression={$_.Ebs.AttachTime}}
```

The following is example output.

```
VolumeId              AttachTime
--------              ----------
vol-0b243c8d927752d2b 3/23/2020 12:21:14 AM
vol-043eadbeb4a8387c3 9/5/2020 7:39:22 PM
vol-0c3f0c4e55c082753 4/23/2019 4:07:40 PM
```

------

# Monitor the progress of Amazon EBS volume modifications
Monitor modifications

When you modify an EBS volume, it goes through a sequence of states. The volume enters the `modifying` state, the `optimizing` state, and finally the `completed` state. At this point, the volume is ready to be further modified. 

While the volume is in the `optimizing` state, volume performance is between the source and target configuration specifications. Transitional volume performance is no less than the source volume performance. If you are downgrading IOPS, transitional volume performance is no less than the target volume performance.

Volume modification changes take effect as follows:
+ Size increases take effect once the volume modification reaches the `optimizing` state, which usually takes a few seconds.
+ Performance (IOPS and throughput) changes can take from a few minutes to a few hours to complete, depending on the requested volume configuration. Typically, a fully used 1-TiB volume can take about 6 hours to migrate to a new performance configuration. In some cases, it can take more than 24 hours for a new performance configuration to take effect, such as when the volume has not been fully initialized.

The possible volume states are `creating`, `available`, `in-use`, `deleting`, `deleted`, and `error`.

The possible modification states are `modifying`, `optimizing`, and `completed`.

------
#### [ Console ]

**To monitor progress of a modification**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume.

1. The **Volume state** column and the **Volume state** field in the **Details** tab contain information in the following format: *Volume state* - *Modification state* (*Modification progress*%). The following image shows the volume and volume modification states.  
![\[Volume and volume modification states\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/volume_state.png)

   After the modification completes, only the volume state is displayed. The modification state and progress are no longer displayed.

   Alternatively, you can use Amazon EventBridge to create a notification rule for volume modification events. For more information, see [Getting started with Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-get-started.html).

------
#### [ AWS CLI ]

**To monitor progress of a modification**  
Use the [describe-volumes-modifications](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes-modifications.html) command to view the progress of one or more volume modifications. The following example describes the volume modifications for two volumes.

```
aws ec2 describe-volumes-modifications \
    --volume-ids vol-11111111111111111 vol-22222222222222222
```

In the following example output, the volume modifications are still in the `modifying` state. Progress is reported as a percentage.

```
{
    "VolumesModifications": [
        {
            "TargetSize": 200,
            "TargetVolumeType": "io1",
            "ModificationState": "modifying",
            "VolumeId": "vol-11111111111111111",
            "TargetIops": 10000,
            "StartTime": "2017-01-19T22:21:02.959Z",
            "Progress": 0,
            "OriginalVolumeType": "gp2",
            "OriginalIops": 300,
            "OriginalSize": 100
        },
        {
            "TargetSize": 2000,
            "TargetVolumeType": "sc1",
            "ModificationState": "modifying",
            "VolumeId": "vol-22222222222222222",
            "StartTime": "2017-01-19T22:23:22.158Z",
            "Progress": 0,
            "OriginalVolumeType": "gp2",
            "OriginalIops": 300,
            "OriginalSize": 1000
        }
    ]
}
```
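
When polling for completion, you typically want only the modifications that have not yet finished. The following Python sketch (an illustration only) filters a response shaped like the example above:

```python
import json

# A response shaped like the describe-volumes-modifications output above,
# with one modification still in progress and one already completed.
response = json.loads("""
{
    "VolumesModifications": [
        {"VolumeId": "vol-11111111111111111",
         "ModificationState": "modifying", "Progress": 0},
        {"VolumeId": "vol-22222222222222222",
         "ModificationState": "completed", "Progress": 100}
    ]
}
""")

in_progress = [m["VolumeId"] for m in response["VolumesModifications"]
               if m["ModificationState"] in ("modifying", "optimizing")]
print(in_progress)  # ['vol-11111111111111111']
```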

The next example describes all volumes with a modification state of either `optimizing` or `completed`, and then filters and formats the results to show only modifications that were initiated on or after February 1, 2017:

```
aws ec2 describe-volumes-modifications \
    --filters Name=modification-state,Values="optimizing","completed" \
    --query "VolumesModifications[?StartTime>='2017-02-01'].{ID:VolumeId,STATE:ModificationState}"
```

The following is example output with information about two volumes:

```
[
    {
        "STATE": "optimizing",
        "ID": "vol-06397e7a0eEXAMPLE"
    },
    {
        "STATE": "completed",
        "ID": "vol-ba74e18c2aEXAMPLE"
    }
]
```

------
#### [ PowerShell ]

**To monitor progress of a modification**  
Use the [Get-EC2VolumeModification](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2VolumeModification.html) cmdlet. The following example describes the volume modifications for two volumes.

```
Get-EC2VolumeModification `
    -VolumeId vol-11111111111111111, vol-22222222222222222
```

------

**Note**  
Rarely, a transient AWS fault can result in a `failed` state. This is not an indication of volume health; it merely indicates that the modification to the volume failed. If this occurs, retry the volume modification.

# Extend the file system after resizing an Amazon EBS volume
Extend the file system

After you [increase the size of an EBS volume](requesting-ebs-volume-modifications.md), you must extend the partition and file system to the new, larger size. You can do this as soon as the volume enters the `optimizing` state.

## Before you begin

+ Create a snapshot of the volume, in case you need to roll back your changes. For more information, see [Create Amazon EBS snapshots](ebs-creating-snapshot.md).
+ Confirm that the volume modification succeeded and that it is in the `optimizing` or `completed` state. For more information, see [Monitor the progress of Amazon EBS volume modifications](monitoring-volume-modifications.md).
+ Ensure that the volume is attached to the instance and that it is formatted and mounted. For more information, see [Format and mount an attached volume](ebs-using-volumes.md#ebs-format-mount-volume).
+ (*Linux instances only*) If you are using logical volumes on the Amazon EBS volume, you must use Logical Volume Manager (LVM) to extend the logical volume. For instructions about how to do this, see the **Extend the LV** section in the article [ How do I use LVM to create a logical volume on an EBS volume's partition?](https://repost.aws/knowledge-center/create-lv-on-ebs-partition).

## Linux instances


**Note**  
The following instructions walk you through the process of extending **XFS** and **Ext4** file systems for Linux. For information about extending a different file system, see its documentation.

Before you can extend a file system on Linux, you must extend the partition, if your volume has one.

### Extend the file system of EBS volumes


Use the following procedure to extend the file system for a resized volume.

Note that device and partition naming differs for Xen instances and instances built on the Nitro System. To determine whether your instance is Xen-based or Nitro-based, see [ Amazon EC2 hypervisor type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#instance-hypervisor-type).

**To extend the file system of EBS volumes**

1. [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html).

1. Resize the partition, if needed. To do so:

   1. Check whether the volume has a partition. Use the **lsblk** command.

------
#### [ Nitro instance example ]

      In the following example output, the root volume (`nvme0n1`) has two partitions (`nvme0n1p1` and `nvme0n1p128`), while the additional volume (`nvme1n1`) has no partitions.

      ```
      [ec2-user ~]$ sudo lsblk
      NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      nvme1n1       259:0    0  30G  0 disk /data
      nvme0n1       259:1    0  16G  0 disk
      ├─nvme0n1p1   259:2    0   8G  0 part /
      └─nvme0n1p128 259:3    0   1M  0 part
      ```

------
#### [ Xen instance example ]

      In the following example output, the root volume (`xvda`) has a partition (`xvda1`), while the additional volume (`xvdf`) has no partition.

      ```
      [ec2-user ~]$ sudo lsblk                
      NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      xvda    202:0    0  16G  0 disk
      └─xvda1 202:1    0   8G  0 part /
      xvdf    202:80   0  24G  0 disk
      ```

------
      + If the volume has a partition, continue to the next step (2b).
      + If the volume has no partitions, skip steps 2b, 2c, and 2d, and continue to step 3.
**Troubleshooting tip**  
If you do not see the volume in the command output, ensure that the volume is [attached to the instance](ebs-attaching-volume.md), and that it is [formatted and mounted](ebs-using-volumes.md#ebs-format-mount-volume).

   1. Check whether the partition needs to be extended. In the **lsblk** command output from the previous step, compare the partition size and the volume size.
      + If the partition size is smaller than the volume size, continue to the next step (2c).
      + If the partition size is equal to the volume size, the partition does not need to be extended. Skip steps 2c and 2d, and continue to step 3.
**Troubleshooting tip**  
If the volume still reflects the original size, [ confirm that the volume modification succeeded](monitoring-volume-modifications.md).

   1. Extend the partition. Use the **growpart** command and specify the device name and the partition number.

------
#### [ Nitro instance example ]

      The partition number is the number after the `p`. For example, for `nvme0n1p1`, the partition number is `1`. For `nvme0n1p128`, the partition number is `128`.

      To extend a partition named `nvme0n1p1`, use the following command.

**Important**  
Note the space between the device name (`nvme0n1`) and the partition number (`1`).

      ```
      [ec2-user ~]$ sudo growpart /dev/nvme0n1 1
      ```

------
#### [ Xen instance example ]

      The partition number is the number after the device name. For example, for `xvda1`, the partition number is `1`. For `xvda128`, the partition number is `128`.

      To extend a partition named `xvda1`, use the following command.

**Important**  
Note the space between the device name (`xvda`) and the partition number (`1`).

      ```
      [ec2-user ~]$ sudo growpart /dev/xvda 1
      ```

------
**Troubleshooting tips**  
`mkdir: cannot create directory ‘/tmp/growpart.31171’: No space left on device FAILED: failed to make temp dir`: Indicates that there is not enough free disk space on the volume for growpart to create the temporary directory it needs to perform the resize. Free up some disk space and then try again.
`must supply partition-number`: Indicates that you specified an incorrect partition. Use the **lsblk** command to confirm the partition name, and ensure that you enter a space between the device name and the partition number.
`NOCHANGE: partition 1 is size 16773087. it cannot be grown`: Indicates that the partition already extends the entire volume and can't be extended. [Confirm that the volume modification succeeded](monitoring-volume-modifications.md).

   1. Verify that the partition has been extended. Use the **lsblk** command. The partition size should now be equal to the volume size.

------
#### [ Nitro instance example ]

      The following example output shows that both the volume (`nvme0n1`) and the partition (`nvme0n1p1`) are the same size (`16 GB`).

      ```
      [ec2-user ~]$ sudo lsblk
      NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      nvme1n1       259:0    0  30G  0 disk /data
      nvme0n1       259:1    0  16G  0 disk
      ├─nvme0n1p1   259:2    0  16G  0 part /
      └─nvme0n1p128 259:3    0   1M  0 part
      ```

------
#### [ Xen instance example ]

      The following example output shows that both the volume (`xvda`) and the partition (`xvda1`) are the same size (`16 GB`).

      ```
      [ec2-user ~]$ sudo lsblk               
      NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      xvda    202:0    0  16G  0 disk
      └─xvda1 202:1    0  16G  0 part /
      xvdf    202:80   0  24G  0 disk
      ```

------

1. Extend the file system.

   1. Get the name, size, type, and mount point for the file system that you need to extend. Use the **df -hT** or **lsblk -f** command.

------
#### [ Nitro instance example ]

      The following example output for the **df -hT** command shows that the `/dev/nvme0n1p1` file system is 8 GB in size, its type is `xfs`, and its mount point is `/`.

      ```
      [ec2-user ~]$ df -hT
      Filesystem      Type  Size  Used Avail Use% Mounted on
      /dev/nvme0n1p1  xfs   8.0G  1.6G  6.5G  20% /
      /dev/nvme1n1    xfs    30G   33M   30G   1% /data
      ...
      ```

------
#### [ Xen instance example ]

      The following example output for the **df -hT** command shows that the `/dev/xvda1` file system is 8 GB in size, its type is `ext4`, and its mount point is `/`.

      ```
      [ec2-user ~]$ df -hT
      Filesystem      Type   Size    Used   Avail   Use%   Mounted on
      /dev/xvda1      ext4   8.0G    1.9G   6.2G    24%    /
      /dev/xvdf1      xfs    24.0G   45M    24.0G   1%     /data
      ...
      ```

------
      + If the file system size is smaller than the volume size, continue to the next step (3b).
      + If the file system size is equal to the volume size, then it does not need to be extended. In this case, skip the remaining steps. The partition and file system have been extended to the new volume size.

   1. The commands to extend the file system differ depending on the file system type. Choose the correct command based on the file system type that you noted in the previous step.
      + **[XFS file system]** Use the **xfs_growfs** command and specify the mount point of the file system that you noted in the previous step.

------
#### [ Nitro and Xen instance example ]

        For example, to extend a file system mounted on `/`, use the following command.

        ```
        [ec2-user ~]$ sudo xfs_growfs -d /
        ```

------
**Troubleshooting tips**  
`xfs_growfs: /data is not a mounted XFS filesystem`: Indicates that you specified the incorrect mount point, or the file system is not XFS. To verify the mount point and file system type, use the **df -hT** command.
`data size unchanged, skipping`: Indicates that the file system already extends the entire volume. If the volume has no partitions, [ confirm that the volume modification succeeded](monitoring-volume-modifications.md). If the volume has partitions, ensure that the partition was extended as described in step 2.
      + **[Ext4 file system]** Use the **resize2fs** command and specify the name of the file system that you noted in the previous step.

------
#### [ Nitro instance example ]

        For example, to extend a file system named `/dev/nvme0n1p1`, use the following command.

        ```
        [ec2-user ~]$ sudo resize2fs /dev/nvme0n1p1
        ```

------
#### [ Xen instance example ]

        For example, to extend a file system named `/dev/xvda1`, use the following command.

        ```
        [ec2-user ~]$ sudo resize2fs /dev/xvda1
        ```

------
**Troubleshooting tips**  
`resize2fs: Bad magic number in super-block while trying to open /dev/xvda1`: Indicates that the file system is not Ext4. To verify the file system type, use the **df -hT** command.
`open: No such file or directory while opening /dev/xvdb1`: Indicates that you specified an incorrect partition. To verify the partition, use the **df -hT** command.
`The filesystem is already 3932160 blocks long. Nothing to do!`: Indicates that the file system already extends the entire volume. If the volume has no partitions, [confirm that the volume modification succeeded](monitoring-volume-modifications.md). If the volume has partitions, ensure that the partition was extended, as described in step 2.
      + **[Other file system]** See the documentation for your file system for instructions.

   1. Verify that the file system has been extended. Use the **df -hT** command and confirm that the file system size is equal to the volume size.
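The choice between the two commands in step 3b can be summarized as a small helper, shown here as a sketch. It only prints the command to run; the device names and mount points are the examples used in this procedure.

```shell
# Recap of step 3b: pick the grow command from the file system type
# that df -T reports. This helper prints the command rather than
# running it; device names and mount points are examples.
grow_fs_command() {
  fstype="$1"; target="$2"
  case "$fstype" in
    xfs)  echo "sudo xfs_growfs -d $target" ;;  # target is the mount point
    ext4) echo "sudo resize2fs $target" ;;      # target is the device name
    *)    echo "see your file system's documentation" ;;
  esac
}

grow_fs_command xfs /                 # prints: sudo xfs_growfs -d /
grow_fs_command ext4 /dev/nvme0n1p1   # prints: sudo resize2fs /dev/nvme0n1p1
```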

## Windows instances


Use one of the following methods to extend the file system on a Windows instance.

------
#### [ Disk Management utility ]

**To extend a file system using Disk Management**

1. Before extending a file system that contains valuable data, it is a best practice to create a snapshot of the volume that contains it in case you need to roll back your changes. For more information, see [Create Amazon EBS snapshots](ebs-creating-snapshot.md).

1. Log in to your Windows instance using Remote Desktop.

1. In the **Run** dialog, enter **diskmgmt.msc** and press Enter. The Disk Management utility opens.  
![\[Windows Server Disk Management Utility\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/Expand-Volume-Win2008-before.png)

1. On the **Disk Management** menu, choose **Action**, **Rescan Disks**.

1. Open the context (right-click) menu for the expanded drive and choose **Extend Volume**.
**Note**  
**Extend Volume** might be disabled (grayed out) if:  
The unallocated space is not adjacent to the drive. The unallocated space must be adjacent to the right side of the drive you want to extend.
The volume uses the Master Boot Record (MBR) partition style and it is already 2 TB in size. Volumes that use MBR cannot exceed 2 TB in size.  
![\[Windows Server Disk Management Utility\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/Expand-Volume-Win2008-before-menu.png)

1. In the **Extend Volume** wizard, choose **Next**. For **Select the amount of space in MB**, enter the number of megabytes by which to extend the volume. Generally, you specify the maximum available space. The highlighted text under **Selected** is the amount of space that is added, not the final size the volume will have. Complete the wizard.  
![\[Windows Server Extend Volume Wizard\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/Extend-Volume-Wizard-Win2008.png)

1. If you increase the size of an NVMe volume on an instance that does not have the AWS NVMe driver, you must reboot the instance to enable Windows to see the new volume size. For more information about installing the AWS NVMe driver, see [AWS NVMe drivers](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/aws-nvme-drivers.html).

------
#### [ PowerShell ]

Use the following procedure to extend a Windows file system using PowerShell.

**To extend a file system using PowerShell**

1. Before extending a file system that contains valuable data, it is a best practice to create a snapshot of the volume that contains it in case you need to roll back your changes. For more information, see [Create Amazon EBS snapshots](ebs-creating-snapshot.md).

1. Log in to your Windows instance using Remote Desktop.

1. Run PowerShell as an administrator.

1. Run the `Get-Partition` command. PowerShell returns the corresponding partition number for each partition, the drive letter, offset, size, and type. Note the drive letter of the partition to extend.

1. Run the following command to rescan the disk.

   ```
   "rescan" | diskpart
   ```

1. Run the following command, using the drive letter you noted in step 4 in place of **<drive-letter>**. PowerShell returns the minimum and maximum size of the partition allowed, in bytes.

   ```
   Get-PartitionSupportedSize -DriveLetter <drive-letter>
   ```

1. To extend the partition to a specified amount, run the following command, entering the new size of the volume in place of **<size>**. You can enter the size in `KB`, `MB`, or `GB`; for example, `50GB`.

   ```
   Resize-Partition -DriveLetter <drive-letter> -Size <size>
   ```

   To extend the partition to the maximum available size, run the following command.

   ```
   Resize-Partition -DriveLetter <drive-letter> -Size $(Get-PartitionSupportedSize -DriveLetter <drive-letter>).SizeMax
   ```

   The following PowerShell commands show the complete command and response flow for extending a file system to a specific size.  
![\[Extend a partition using PowerShell - specific\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/ebs-extend-powershell-v3-specific.png)

   The following PowerShell commands show the complete command and response flow for extending a file system to the maximum available size.  
![\[Extend a partition using PowerShell - max\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/ebs-extend-powershell-v3-max.png)

------

# Detach an Amazon EBS volume from an Amazon EC2 instance
Detach a volume from an instance

You need to detach an Amazon Elastic Block Store (Amazon EBS) volume from an instance before you can attach it to a different instance or delete it. Detaching a volume does not affect the data on the volume.

**Topics**
+ [Considerations](#considerations)
+ [Unmount and detach a volume](#umount-detach-volume)
+ [Troubleshoot](#detach-troubleshoot)

## Considerations

+ You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance. However, if the instance is running, you must first unmount the volume from the instance.
+ If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume.
+ You can reattach a volume that you detached (without unmounting it), but it might not get the same mount point. If there were writes to the volume in progress when it was detached, the data on the volume might be out of sync.
+ After you detach a volume, you are still charged for volume storage as long as the storage amount exceeds the limit of the AWS Free Tier. You must delete a volume to avoid incurring further charges. For more information, see [Delete an Amazon EBS volume](ebs-deleting-volume.md).

## Unmount and detach a volume


Use the following procedures to unmount and detach a volume from an instance. This can be useful when you need to attach the volume to a different instance or when you need to delete the volume.

**Topics**
+ [Step 1: Unmount the volume](#unmount)
+ [Step 2: Detach the volume from the instance](#detach)
+ [Step 3: (*Windows instances only*) Uninstall the offline device locations](#uninstall)

### Step 1: Unmount the volume


#### Linux instances


From your Linux instance, use the following command to unmount the `/dev/sdh` device.

```
[ec2-user ~]$ sudo umount -d /dev/sdh
```
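Before unmounting, it can help to confirm that the device is actually mounted. One way, assuming a Linux instance (`/proc/mounts` is Linux-specific), is a small helper like the following; the `/data` mount point and `/dev/sdh` device are examples.

```shell
# Check whether a path is currently a mount point by scanning
# /proc/mounts (Linux-specific; /data and /dev/sdh are examples).
is_mounted() {
  awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /data; then
  echo "/data is mounted; run: sudo umount -d /dev/sdh"
else
  echo "/data is not mounted"
fi
```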

#### Windows instances


From your Windows instance, unmount the volume as follows.

1. Start the Disk Management utility.
   + (Windows Server 2012 and later) On the taskbar, right-click the Windows logo and choose **Disk Management**.
   + (Windows Server 2008) Choose **Start**, **Administrative Tools**, **Computer Management**, **Disk Management**.

1. Right-click the disk (for example, right-click **Disk 1**) and then choose **Offline**. Wait for the disk status to change to **Offline** before opening the Amazon EC2 console.

### Step 2: Detach the volume from the instance


To detach the volume from the instance, use one of the following methods:

------
#### [ Console ]

**To detach an EBS volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**. 

1. Select the volume.

1. Choose **Actions**, **Detach volume**. 

1. When prompted for confirmation, choose **Detach**.

------
#### [ AWS CLI ]

**To detach an EBS volume from an instance**  
After unmounting the volume, use the [detach-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/detach-volume.html) command.

```
aws ec2 detach-volume --volume-id vol-01234567890abcdef
```
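After issuing **detach-volume**, you can block until the volume is free for reuse before attaching it elsewhere or deleting it. The following sketch polls the volume state; the volume ID is a placeholder, and the `aws` calls are commented out with a stub in their place so the sketch runs on its own.

```shell
# Wait for a detached volume to become available. The volume ID is a
# placeholder; the aws calls are commented out and stubbed.
VOLUME_ID="vol-01234567890abcdef"

volume_state() {
  # In practice:
  # aws ec2 describe-volumes --volume-ids "$VOLUME_ID" \
  #     --query 'Volumes[0].State' --output text
  echo "available"   # stub
}

until [ "$(volume_state)" = "available" ]; do
  sleep 5
done
echo "$VOLUME_ID is detached and available"

# The AWS CLI also ships a built-in waiter for the same purpose:
# aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
```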

------
#### [ PowerShell ]

**To detach an EBS volume from an instance**  
After unmounting the volume, use the [Dismount-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Dismount-EC2Volume.html) cmdlet.

```
Dismount-EC2Volume -VolumeId vol-01234567890abcdef
```

------

### Step 3: (*Windows instances only*) Uninstall the offline device locations


When you unmount and detach a volume from an instance, Windows flags the device location as offline. The device location remains offline after rebooting, and stopping and restarting the instance. When you restart the instance, Windows might mount one of the remaining volumes to the offline device location. This causes the volume to be unavailable in Windows. To prevent this from happening and to ensure that all volumes are attached to online device locations the next time Windows starts, perform the following steps:

1. On the instance, open the Device Manager.

1. In the Device Manager, select **View**, **Show hidden devices**.

1. In the list of devices, expand the **Storage controllers** node.

   The device locations to which the detached volumes were mounted are named `AWS NVMe Elastic Block Storage Adapter` and they should appear greyed out.

1. Right-click each greyed out device location named `AWS NVMe Elastic Block Storage Adapter`, select **Uninstall device** and choose **Uninstall**.
**Important**  
Do not select the **Delete the driver software for this device** check box.

## Troubleshoot


The following are common problems encountered when detaching volumes, and how to resolve them.

**Note**  
To guard against the possibility of data loss, take a snapshot of your volume before attempting to unmount it. Forced detachment of a stuck volume can damage the file system or the data it contains, or leave you unable to attach a new volume using the same device name, unless you reboot the instance.
+ If you encounter problems while detaching a volume through the Amazon EC2 console, it can be helpful to use the **describe-volumes** CLI command to diagnose the issue. For more information, see [describe-volumes](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes.html).
+ If your volume stays in the `detaching` state, you can force the detachment by choosing **Force Detach**. Use this option only as a last resort to detach a volume from a failed instance, or if you are detaching a volume with the intention of deleting it. The instance doesn't get an opportunity to flush file system caches or file system metadata. If you use this option, you must perform the file system check and repair procedures. 
+ If you've tried to force the volume to detach multiple times over several minutes and it stays in the `detaching` state, you can post a request for help to [AWS re:Post](https://repost.aws/). To help expedite a resolution, include the volume ID and describe the steps that you've already taken.
+ When you attempt to detach a volume that is still mounted, the volume can become stuck in the `busy` state while it is trying to detach. The following output from **describe-volumes** shows an example of this condition:

  ```
  "Volumes": [
      {
          "AvailabilityZone": "us-west-2b",
          "Attachments": [
              {
                  "AttachTime": "2022-07-21T23:44:52.000Z",
                  "InstanceId": "i-1234567890abcdef0",
                  "VolumeId": "vol-01234567890abcdef",
                  "State": "busy",
                  "DeleteOnTermination": false,
                  "Device": "/dev/sdf"
              }
          ...
      }
  ]
  ```

  When you encounter this state, detachment can be delayed indefinitely until you unmount the volume, force detachment, reboot the instance, or all three.
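For the force-detach last resort described above, the **detach-volume** command accepts a `--force` flag. The volume ID below is a placeholder, and the call is shown commented out because it requires live credentials; heed the caution about file system checks afterward.

```shell
# Last-resort forced detach (see the caution above). The volume ID
# is a placeholder; the call is commented out because it requires
# live AWS credentials.
VOLUME_ID="vol-01234567890abcdef"
# aws ec2 detach-volume --volume-id "$VOLUME_ID" --force
echo "after a forced detach of $VOLUME_ID, check and repair the file system"
```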

# Delete an Amazon EBS volume
Delete a volume

You can delete an Amazon EBS volume that you no longer need. After deletion, its data is gone and the volume can't be attached to any instance. Before you delete a volume, you can create a snapshot of it, which you can use to re-create the volume later.

You can't delete a volume if it's attached to an instance. To delete a volume, you must first detach it. For more information, see [Detach an Amazon EBS volume from an Amazon EC2 instance](ebs-detaching-volume.md).

If you delete a volume that matches a Recycle Bin retention rule, the volume is retained in the Recycle Bin instead of being immediately deleted. For more information, see [Recycle Bin](recycle-bin.md).

------
#### [ Console ]

**To delete an EBS volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume. Verify that the volume is in the **Available** state.

1. Choose **Actions**, **Delete volume**.

   If this option is disabled, the volume is attached to an instance and can't be deleted.

1. When prompted for confirmation, enter **delete**, and then choose **Delete**.

------
#### [ AWS CLI ]

**To check whether an EBS volume is in use**  
Use the [describe-volumes](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes.html) command. If the volume is in use, the state is `in-use`. Otherwise, it is `available`.

```
aws ec2 describe-volumes \
    --volume-id vol-01234567890abcdef \
    --query 'Volumes[*].State' \
    --output text
```

**To delete an EBS volume**  
Use the [delete-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/delete-volume.html) command.

```
aws ec2 delete-volume --volume-id vol-01234567890abcdef
```
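The two commands above can be combined so that deletion only proceeds when the volume is available. In this sketch the volume ID is a placeholder, and the `aws` calls are commented out with a stub in their place so the sketch runs on its own.

```shell
# Delete a volume only when it is in the available state. The volume
# ID is a placeholder; the aws calls are commented out and stubbed.
VOLUME_ID="vol-01234567890abcdef"

state() {
  # aws ec2 describe-volumes --volume-id "$VOLUME_ID" \
  #     --query 'Volumes[*].State' --output text
  echo "available"   # stub
}

if [ "$(state)" = "available" ]; then
  echo "deleting $VOLUME_ID"
  # aws ec2 delete-volume --volume-id "$VOLUME_ID"
else
  echo "volume is $(state); detach it first"
fi
```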

------
#### [ PowerShell ]

**To check whether an EBS volume is in use**  
Use the [Get-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2Volume.html) cmdlet. If the volume is in use, the state is `in-use`. Otherwise, it is `available`.

```
(Get-EC2Volume `
    -VolumeId vol-01234567890abcdef).State.Value
```

**To delete an EBS volume**  
Use the [Remove-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-EC2Volume.html) cmdlet.

```
Remove-EC2Volume -VolumeId vol-01234567890abcdef
```

------

# Replace an Amazon EBS volume using a snapshot
Replace a volume

Amazon EBS snapshots are the preferred backup tool on Amazon EC2 because of their speed, convenience, and cost. When you create a volume from a snapshot, you re-create the volume's state at a specific point in time, with the data saved up to that point intact. By attaching a volume created from a snapshot to an instance, you can duplicate data across Regions, create test environments, replace a damaged or corrupted production volume in its entirety, or retrieve specific files and directories and transfer them to another attached volume. For more information, see [Amazon EBS snapshots](ebs-snapshots.md).

You can use one of the following procedures to replace an Amazon EBS volume with another volume created from a previous snapshot of that volume.

**Requirement**  
You must create the volume in the same Availability Zone as the instance. Volumes must be attached to instances in the same Availability Zone.

------
#### [ Console ]

**To replace a volume**

1. Create a volume from the snapshot and write down the ID of the new volume. For more information, see [Create an Amazon EBS volume](ebs-creating-volume.md).

1. On the Instances page, select the instance on which to replace the volume and write down the instance ID.

   With the instance still selected, choose the **Storage** tab. In the **Block devices** section, find the volume to replace and write down the device name for the volume, for example `/dev/sda1`.

1. On the **Storage** tab, choose the volume ID, and then [ unmount and detach the volume from the instance](ebs-detaching-volume.md#umount-detach-volume).

1. Select the new volume that you created in step 1 and choose **Actions**, **Attach volume**.

   For **Instance** and **Device name**, enter the instance ID and device name that you wrote down in Step 2, and then choose **Attach volume**.

1. Connect to your instance and mount the volume. For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md).

------
#### [ AWS CLI ]

**To replace a volume**

1. Create a new volume from the snapshot. Use the [create-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-volume.html) command with the `--snapshot-id` option. For `--availability-zone`, specify the same Availability Zone as the instance. Note the ID of the new volume in the output.

   ```
   aws ec2 create-volume \
       --volume-type gp3 \
       --snapshot-id snap-0abcdef1234567890 \
       --availability-zone us-east-1a
   ```

1. Get the device name of the volume to replace. Use the [describe-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html) command. For `--instance-ids`, specify the ID of the instance on which to replace the volume. Note the device name and volume ID of the volume to replace.

   ```
   aws ec2 describe-instances \
       --instance-ids i-1234567890abcdef0 \
       --query 'Reservations[].Instances[].BlockDeviceMappings'
   ```

1. Detach the volume to replace from the instance. Use the [detach-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/detach-volume.html) command.

   ```
   aws ec2 detach-volume --volume-id vol-xxxxxxxxxxxxxxxxx
   ```

1. Attach the replacement volume to the instance. Use the [attach-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/attach-volume.html) command. For `--volume-id`, specify the ID of the replacement volume. For `--instance-id`, specify the ID of the instance on which to attach the volume. For `--device`, specify the same device name that you noted previously.

   ```
   aws ec2 attach-volume \
       --volume-id vol-01234567890abcdef \
       --instance-id i-1234567890abcdef0 \
       --device /dev/sdf
   ```

1. Connect to your instance and mount the volume. For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md).
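The four CLI steps above can be strung together as one sketch. Here `run` echoes each command instead of executing it so the flow is visible and runnable as-is; replace the echo with a real `aws ec2` invocation to use it. All IDs are placeholders, and in practice you would parse the new volume ID from the **create-volume** output.

```shell
# Sketch of the replace-volume flow. run() echoes the commands so the
# sequence is visible without live credentials; all IDs are placeholders.
run() { echo "aws ec2 $*"; }   # replace with: aws ec2 "$@"

SNAPSHOT_ID="snap-0abcdef1234567890"
INSTANCE_ID="i-1234567890abcdef0"
OLD_VOLUME_ID="vol-0aaaaaaaaaaaaaaaa"
NEW_VOLUME_ID="vol-01234567890abcdef"   # in practice, parse create-volume output
DEVICE="/dev/sdf"
AZ="us-east-1a"

run create-volume --volume-type gp3 --snapshot-id "$SNAPSHOT_ID" --availability-zone "$AZ"
run detach-volume --volume-id "$OLD_VOLUME_ID"
run wait volume-available --volume-ids "$NEW_VOLUME_ID"
run attach-volume --volume-id "$NEW_VOLUME_ID" --instance-id "$INSTANCE_ID" --device "$DEVICE"
```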

------
#### [ PowerShell ]

**To replace a volume**

1. Create a new volume from the snapshot. Use the [New-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2Volume.html) cmdlet with the `-SnapshotId` option. For `-AvailabilityZone`, specify the same Availability Zone as the instance. Note the ID of the new volume in the output.

   ```
   New-EC2Volume `
       -VolumeType gp3 `
       -SnapshotId snap-0abcdef1234567890 `
       -AvailabilityZone us-east-1a
   ```

1. Get the device name of the volume to replace. Use the [Get-EC2Instance](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2Instance.html) cmdlet. For `-InstanceId`, specify the ID of the instance on which to replace the volume. Note the device name and volume ID of the volume to replace.

   ```
   (Get-EC2Instance `
       -InstanceId i-1234567890abcdef0).Instances.BlockDeviceMappings | `
        Format-Table DeviceName, @{Name="VolumeId";Expression={$_.Ebs.VolumeId}}
   ```

1. Detach the volume to replace from the instance. Use the [Dismount-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Dismount-EC2Volume.html) cmdlet.

   ```
   Dismount-EC2Volume -VolumeId vol-xxxxxxxxxxxxxxxxx
   ```

1. Attach the replacement volume to the instance. Use the [Add-EC2Volume](https://docs.aws.amazon.com/powershell/latest/reference/items/Add-EC2Volume.html) cmdlet. For `-VolumeId`, specify the ID of the replacement volume. For `-InstanceId`, specify the ID of the instance on which to attach the volume. For `-Device`, specify the same device name that you noted previously.

   ```
   Add-EC2Volume `
       -VolumeId vol-01234567890abcdef `
       -InstanceId i-1234567890abcdef0 `
       -Device /dev/sdf
   ```

1. Connect to your instance and mount the volume. For more information, see [Make an Amazon EBS volume available for use](ebs-using-volumes.md).

------

# Amazon EBS volume status checks
Status checks

Volume status checks enable you to better understand, track, and manage potential inconsistencies in the data on an Amazon EBS volume. They are designed to provide you with the information that you need to determine whether your Amazon EBS volumes are impaired, and to help you control how a potentially inconsistent volume is handled.

Volume status checks are automated tests that run every 5 minutes and return a pass or fail status. If all checks pass, the status of the volume is `ok`. If a check fails, the status of the volume is `impaired`. If the status is `insufficient-data`, the checks may still be in progress on the volume. You can view the results of volume status checks to identify any impaired volumes and take any necessary actions.

When Amazon EBS determines that a volume's data is potentially inconsistent, the default is that it disables I/O to the volume from any attached EC2 instances, which helps to prevent data corruption. After I/O is disabled, the next volume status check fails, and the volume status is `impaired`. In addition, you'll see an event that lets you know that I/O is disabled, and that you can resolve the impaired status of the volume by enabling I/O to the volume. We wait until you enable I/O to give you the opportunity to decide whether to continue to let your instances use the volume, or to run a consistency check using a command, such as **fsck** (Linux instances) or **chkdsk** (Windows instances), before doing so.

**Note**  
Volume status is based on the volume status checks, and does not reflect the volume state. Therefore, volume status does not indicate volumes in the `error` state (for example, when a volume is incapable of accepting I/O). For information about volume states, see [Volume states](ebs-describing-volumes.md#volume-state).

If the consistency of a particular volume is not a concern, and you'd prefer that the volume be made available immediately if it's impaired, you can override the default behavior by configuring the volume to automatically enable I/O. If you enable the **Auto-Enable IO** volume attribute (`autoEnableIO` in the API), the volume status check continues to pass. In addition, you'll see an event that lets you know that the volume was determined to be potentially inconsistent, but that its I/O was automatically enabled. This enables you to check the volume's consistency or replace it at a later time.
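From the AWS CLI, the status checks and the attributes described above map to a few commands, shown here as a sketch. The volume ID is a placeholder, and the calls are commented out because they require live credentials.

```shell
# Inspect volume status and set the Auto-Enable IO attribute from the
# AWS CLI. The volume ID is a placeholder; the calls are commented out
# because they require live credentials.
VOLUME_ID="vol-01234567890abcdef"
# aws ec2 describe-volume-status --volume-ids "$VOLUME_ID"
# aws ec2 modify-volume-attribute --volume-id "$VOLUME_ID" --auto-enable-io
# To re-enable I/O manually on an impaired volume instead:
# aws ec2 enable-volume-io --volume-id "$VOLUME_ID"
echo "commands prepared for $VOLUME_ID"
```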

The I/O performance status check compares actual volume performance to the expected performance of a volume and alerts you if the volume is performing below expectations. This status check is available only for Provisioned IOPS SSD (`io1` and `io2`) and General Purpose SSD (`gp3`) volumes that are attached to an instance. It is not valid for General Purpose SSD (`gp2`), Throughput Optimized HDD (`st1`), Cold HDD (`sc1`), or Magnetic (`standard`) volumes. The I/O performance status check is performed once every minute, and CloudWatch collects this data every 5 minutes. It might take up to 5 minutes from the moment that you attach a supported volume to an instance for the status check to report the I/O performance status.

**Important**  
While initializing Provisioned IOPS SSD volumes that were restored from snapshots, the performance of the volume may drop below 50 percent of its expected level, which causes the volume to display a `warning` state in the **I/O Performance** status check. This is expected, and you can ignore the `warning` state on Provisioned IOPS SSD volumes while you are initializing them. For more information, see [Manually initialize the volumes after creation](initalize-volume.md#ebs-initialize).

The following table lists statuses for Amazon EBS volumes.


| Volume status | I/O enabled status | I/O performance status (`io1`, `io2`, and `gp3` volumes only) | 
| --- | --- | --- | 
|  `ok`  |  Enabled (I/O Enabled or I/O Auto-Enabled)  |  Normal (Volume performance is as expected)  | 
|  `warning`  |  Enabled (I/O Enabled or I/O Auto-Enabled)  |  Degraded (Volume performance is below expectations); Severely Degraded (Volume performance is well below expectations)  | 
|  `impaired`  |  Enabled (I/O Enabled or I/O Auto-Enabled); Disabled (Volume is offline and pending recovery, or is waiting for the user to enable I/O)  |  Stalled (Volume performance is severely impacted); Not Available (Unable to determine I/O performance because I/O is disabled)  | 
|  `insufficient-data`  |  Enabled (I/O Enabled or I/O Auto-Enabled); Insufficient Data  |  Insufficient Data  | 
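To act on these statuses programmatically, you can scan the `DescribeVolumeStatus` response for impaired volumes. The following minimal sketch parses a sample response shaped like the CLI output; the volume IDs and detail values here are illustrative, not taken from a real account.

```python
import json

# Sample response shaped like `aws ec2 describe-volume-status` output.
# The volume IDs and status values are illustrative.
response = json.loads("""
{
  "VolumeStatuses": [
    {
      "VolumeId": "vol-0aaaaaaaaaaaaaaaa",
      "VolumeStatus": {
        "Status": "ok",
        "Details": [
          {"Name": "io-enabled", "Status": "passed"},
          {"Name": "io-performance", "Status": "normal"}
        ]
      }
    },
    {
      "VolumeId": "vol-0bbbbbbbbbbbbbbbb",
      "VolumeStatus": {
        "Status": "impaired",
        "Details": [
          {"Name": "io-enabled", "Status": "disabled"},
          {"Name": "io-performance", "Status": "not-available"}
        ]
      }
    }
  ]
}
""")

# Collect volumes whose status check failed, keyed by volume ID,
# with each check detail (io-enabled, io-performance) and its status.
impaired = {
    vs["VolumeId"]: {d["Name"]: d["Status"] for d in vs["VolumeStatus"]["Details"]}
    for vs in response["VolumeStatuses"]
    if vs["VolumeStatus"]["Status"] == "impaired"
}
for volume_id, details in impaired.items():
    print(f"{volume_id}: io-enabled={details['io-enabled']}, "
          f"io-performance={details['io-performance']}")
```

A volume that appears in `impaired` is a candidate for the recovery steps in [Work with an impaired Amazon EBS volume](work_volumes_impaired.md).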

------
#### [ Console ]

**To view status checks**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

   The **Volume status** column displays the operational status of each volume.

1. To view the status details of a specific volume, select it in the grid and choose the **Status checks** tab.

1. If you have a volume with a failed status check (status is `impaired`), see [Work with an impaired Amazon EBS volume](work_volumes_impaired.md).

Alternatively, you can choose **Events** in the navigation pane to view all the events for your instances and volumes. For more information, see [Amazon EBS volume events](monitoring-vol-events.md).

------
#### [ AWS CLI ]

**To view volume status information**  
Use the [describe-volume-status](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volume-status.html) command.

```
aws ec2 describe-volume-status --volume-ids vol-01234567890abcdef
```

Use the following example to identify impaired volumes.

```
aws ec2 describe-volume-status --filters Name=volume-status.status,Values=impaired
```

------
#### [ PowerShell ]

**To view volume status information**  
Use the [Get-EC2VolumeStatus](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2VolumeStatus.html) cmdlet.

```
Get-EC2VolumeStatus -VolumeId vol-01234567890abcdef
```

Use the following example to identify impaired volumes.

```
Get-EC2VolumeStatus -Filter @{Name="volume-status.status"; Values="impaired"}
```

------

# Amazon EBS volume events
Volume events

When Amazon EBS determines that a volume's data is potentially inconsistent, it disables I/O to the volume from any attached EC2 instances by default. This causes the volume status check to fail, and creates a volume status event that indicates the cause of the failure. 

To automatically enable I/O on a volume with potential data inconsistencies, change the setting of the **Auto-Enabled IO** volume attribute (`autoEnableIO` in the API). For more information about changing this attribute, see [Work with an impaired Amazon EBS volume](work_volumes_impaired.md).

Each event includes a start time that indicates the time at which the event occurred, and a duration that indicates how long I/O for the volume was disabled. The end time is added to the event when I/O for the volume is enabled.

**Volume status events**

`Awaiting Action: Enable IO`  
Volume data is potentially inconsistent. I/O is disabled for the volume until you explicitly enable it. The event description changes to **IO Enabled** after you explicitly enable I/O.

`IO Enabled`  
I/O operations were explicitly enabled for this volume.

`IO Auto-Enabled`  
I/O operations were automatically enabled on this volume after an event occurred. We recommend that you check for data inconsistencies before continuing to use the data.

`Normal`  
For `io1`, `io2`, and `gp3` volumes only. Volume performance is as expected.

`Degraded`  
For `io1`, `io2`, and `gp3` volumes only. Volume performance is below expectations.

`Severely Degraded`  
For `io1`, `io2`, and `gp3` volumes only. Volume performance is well below expectations.

`Stalled`  
For `io1`, `io2`, and `gp3` volumes only. Volume performance is severely impacted.

If you have a volume where I/O is disabled, see [Work with an impaired Amazon EBS volume](work_volumes_impaired.md). If you have a volume where I/O performance is below normal, this might be a temporary condition due to an action you have taken (for example, creating a snapshot of a volume during peak usage, running the volume on an instance that cannot support the I/O bandwidth required, accessing data on the volume for the first time, etc.).
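The start and end times let you determine how long I/O was disabled. The following sketch computes that window from a sample event shaped like an `Events` entry in the `DescribeVolumeStatus` response; the event ID and timestamps are illustrative.

```python
from datetime import datetime

# Sample event shaped like an Events entry in the DescribeVolumeStatus
# response. The event ID and timestamps are illustrative.
event = {
    "EventId": "evol-0example1234567890",
    "EventType": "potential-data-inconsistency",
    "NotBefore": "2024-01-15T10:00:00.000Z",  # start time: I/O was disabled
    "NotAfter": "2024-01-15T10:42:00.000Z",   # end time: I/O was re-enabled
}

def parse(ts):
    # Convert the ISO 8601 timestamp ("Z" suffix) to an aware datetime.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

disabled_for = parse(event["NotAfter"]) - parse(event["NotBefore"])
print(f"I/O was disabled for {disabled_for.total_seconds() / 60:.0f} minutes")
```

An event that is still open has no end time yet, so check for the `NotAfter` key before computing the duration.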

------
#### [ Console ]

**To view events for your volumes**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Events**. All instances and volumes that have events are listed.

1. You can filter by volume to view only volume status. You can also filter on specific status types.

1. Select a volume to view its specific event.

------
#### [ AWS CLI ]

**To view events for your volumes**  
Use the [describe-volume-status](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volume-status.html) command.

```
aws ec2 describe-volume-status --volume-ids vol-01234567890abcdef
```

------
#### [ PowerShell ]

**To view events for your volumes**  
Use the [Get-EC2VolumeStatus](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2VolumeStatus.html) cmdlet.

```
Get-EC2VolumeStatus -VolumeId vol-01234567890abcdef
```

------

# Work with an impaired Amazon EBS volume
Work with an impaired volume

Use the following options if a volume is impaired because the volume's data is potentially inconsistent.

**Topics**
+ [Option 1: Perform a consistency check on the volume attached to its instance](#work_volumes_impaired_option1)
+ [Option 2: Perform a consistency check on the volume using another instance](#work_volumes_impaired_option2)
+ [Option 3: Delete the volume if you no longer need it](#work_volumes_impaired_option3)

## Option 1: Perform a consistency check on the volume attached to its instance


The simplest option is to enable I/O and then perform a data consistency check on the volume while the volume is still attached to its Amazon EC2 instance.

**To perform a consistency check on an attached volume**

1. Stop any applications from using the volume.

1. Enable I/O on the volume. Use one of the following methods.

------
#### [ Console ]

**To enable I/O for a volume**

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. In the navigation pane, choose **Events**.

   1. Select the volume.

   1. Choose **Actions**, **Enable I/O**.

------
#### [ AWS CLI ]

**To enable I/O for a volume**  
Use the [enable-volume-io](https://docs.aws.amazon.com/cli/latest/reference/ec2/enable-volume-io.html) command.

   ```
   aws ec2 enable-volume-io --volume-id vol-01234567890abcdef
   ```

------
#### [ PowerShell ]

**To enable I/O for a volume**  
Use the [Enable-EC2VolumeIO](https://docs.aws.amazon.com/powershell/latest/reference/items/Enable-EC2VolumeIO.html) cmdlet.

   ```
   Enable-EC2VolumeIO -VolumeId vol-01234567890abcdef
   ```

------

1. Check the data on the volume.

   1. Run the **fsck** (Linux instances) or **chkdsk** (Windows instances) command.

   1. (Optional) Review any available application or system logs for relevant error messages.

   1. If the volume has been impaired for more than 20 minutes, you can contact the AWS Support Center. Choose **Troubleshoot**, and then in the **Troubleshoot Status Checks** dialog box, choose **Contact Support** to submit a support case.

## Option 2: Perform a consistency check on the volume using another instance


Use the following procedure to check the volume outside your production environment.

**Important**  
This procedure may cause the loss of write I/Os that were suspended when volume I/O was disabled.

**To perform a consistency check on a volume in isolation**

1. Stop any applications from using the volume.

1. Detach the volume from the instance. For more information, see [Detach an Amazon EBS volume from an Amazon EC2 instance](ebs-detaching-volume.md).

1. Enable I/O on the volume. Use one of the following methods.

------
#### [ Console ]

**To enable I/O for a volume**

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. In the navigation pane, choose **Events**.

   1. Select the volume that you detached in the previous step.

   1. Choose **Actions**, **Enable I/O**.

------
#### [ AWS CLI ]

**To enable I/O for a volume**  
Use the [enable-volume-io](https://docs.aws.amazon.com/cli/latest/reference/ec2/enable-volume-io.html) command.

   ```
   aws ec2 enable-volume-io --volume-id vol-01234567890abcdef
   ```

------
#### [ PowerShell ]

**To enable I/O for a volume**  
Use the [Enable-EC2VolumeIO](https://docs.aws.amazon.com/powershell/latest/reference/items/Enable-EC2VolumeIO.html) cmdlet.

   ```
   Enable-EC2VolumeIO -VolumeId vol-01234567890abcdef
   ```

------

1. Attach the volume to another instance. For more information, see [Launch your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html) and [Attach an Amazon EBS volume to an Amazon EC2 instance](ebs-attaching-volume.md).

1. Check the data on the volume.

   1. Run the **fsck** (Linux instances) or **chkdsk** (Windows instances) command.

   1. (Optional) Review any available application or system logs for relevant error messages.

   1. If the volume has been impaired for more than 20 minutes, you can contact the AWS Support Center. Choose **Troubleshoot**, and then in the troubleshooting dialog box, choose **Contact Support** to submit a support case.

## Option 3: Delete the volume if you no longer need it


If you want to remove the volume from your environment, simply delete it. For information about deleting a volume, see [Delete an Amazon EBS volume](ebs-deleting-volume.md).

If you have a recent snapshot that backs up the data on the volume, you can create a new volume from the snapshot. For more information, see [Create an Amazon EBS volume](ebs-creating-volume.md).

# Auto-enable I/O for impaired Amazon EBS volumes
Auto-enable I/O

When Amazon EBS determines that a volume's data is potentially inconsistent, it disables I/O to the volume from any attached EC2 instances by default. This causes the volume status check to fail, and creates a volume status event that indicates the cause of the failure. If the consistency of a particular volume is not a concern, and you prefer that the volume be made available immediately if it's `impaired`, you can override the default behavior by configuring the volume to automatically enable I/O. If you enable the **Auto-Enabled IO** volume attribute (`autoEnableIO` in the API), I/O between the volume and the instance is automatically re-enabled and the volume's status check passes. In addition, you'll see an event that lets you know that the volume was in a potentially inconsistent state, but that its I/O was automatically enabled. When this event occurs, check the volume's consistency and replace the volume if necessary. For more information, see [Amazon EBS volume events](monitoring-vol-events.md).

------
#### [ Console ]

**To view the Auto-Enabled IO attribute of a volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**. 

1. Select the volume and choose the **Status checks** tab.

   The **Auto-enabled I/O** field displays the current setting (**Enabled** or **Disabled**) for the selected volume.

**To modify the Auto-Enabled IO attribute of a volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**. 

1. Select the volume and choose **Actions**, **Manage auto-enabled I/O**.

1. To automatically enable I/O for an impaired volume, select the **Auto-enable I/O for impaired volumes** check box. To disable the feature, clear the check box.

1. Choose **Update**.

------
#### [ AWS CLI ]

**To view the autoEnableIO attribute of a volume**  
Use the [describe-volume-attribute](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volume-attribute.html) command.

```
aws ec2 describe-volume-attribute \
    --attribute autoEnableIO \
    --volume-id vol-01234567890abcdef
```

The following is example output.

```
{
    "AutoEnableIO": {
        "Value": true
    },
    "VolumeId": "vol-01234567890abcdef"
}
```

**To modify the autoEnableIO attribute of a volume**  
Use the [modify-volume-attribute](https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-volume-attribute.html) command.

```
aws ec2 modify-volume-attribute \
    --auto-enable-io \
    --volume-id vol-01234567890abcdef
```

------
#### [ PowerShell ]

**To view the autoEnableIO attribute of a volume**  
Use the [Get-EC2VolumeAttribute](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2VolumeAttribute.html) cmdlet.

```
(Get-EC2VolumeAttribute `
    -Attribute autoEnableIO `
    -VolumeId vol-01234567890abcdef).AutoEnableIO
```

The following is example output.

```
True
```

**To modify the autoEnableIO attribute of a volume**  
Use the [Edit-EC2VolumeAttribute](https://docs.aws.amazon.com/powershell/latest/reference/items/Edit-EC2VolumeAttribute.html) cmdlet.

```
Edit-EC2VolumeAttribute `
    -AutoEnableIO $true `
    -VolumeId vol-01234567890abcdef
```

------

# Fault testing on Amazon EBS
Fault testing

AWS Fault Injection Service (AWS FIS) is a fully managed service that helps you perform fault injection experiments on your AWS workloads. With EBS actions in AWS FIS, you can test how your applications respond to storage faults that can result in I/O interruptions and degraded performance on your volumes. This controlled testing environment enables you to observe how your applications respond to disruptions so you can identify weaknesses in your architecture and improve the overall resilience of your applications. Using the pause I/O action and the latency injection action, you can test your monitoring and recovery mechanisms such as Amazon CloudWatch alarms and failover workflows, and improve the resiliency of your mission-critical applications to storage faults. For more information about AWS FIS, see the [AWS Fault Injection Service User Guide](https://docs.aws.amazon.com/fis/latest/userguide/what-is.html).

## Available experiments


Amazon EBS currently supports two AWS FIS fault injections:
+ [Pause I/O fault injection](ebs-fis-pause-io.md)
+ [Latency injection](ebs-fis-latency-injection.md)

## Considerations


The following considerations apply:
+ All Amazon EBS volume types are supported. Both root volumes and data volumes are supported. Instance store volumes are not supported.
+ Volumes must be attached to [Nitro-based EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#instance-hypervisor-type).
+ Volumes resume their original I/O performance after the experiment completes. You can also stop a running experiment before it completes, or create a stop condition that stops the experiment if it reaches a threshold that you define in a CloudWatch alarm.
+ You can use AWS FIS with Multi-Attach enabled volumes. All of the attached instances are impacted. You can't select a specific volume-instance attachment for experiments.
+ AWS FIS is not available in Local Zones, Outposts, or Wavelength Zones, and you can't use it with volumes created on an Outpost, in an AWS Wavelength Zone, or in a Local Zone.
+ You can test up to 5 volumes in the same Availability Zone simultaneously when specifying volume ARNs in the console.

# Pause I/O fault injection


Use AWS Fault Injection Service and the Pause I/O action to temporarily stop I/O between an Amazon EBS volume and the instances to which it is attached to test how your workloads handle I/O interruptions. 

For more information about AWS FIS, see the [AWS Fault Injection Service User Guide](https://docs.aws.amazon.com/fis/latest/userguide/what-is.html).

**Considerations**

Keep in mind the following considerations for pausing volume I/O:
+ Pause I/O is supported on all [Nitro-based instance types](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html).
+ To test your OS timeout configuration, set the experiment duration equal to or greater than the value specified for `nvme_core.io_timeout`. For more information, see [NVMe I/O operation timeout for Amazon EBS volumes](timeout-nvme-ebs-volumes.md).
+ If you drive I/O to a volume that has I/O paused, the following happens:
  + The volume's status transitions to `impaired` within 120 seconds. For more information, see [Amazon EBS volume status checks](monitoring-volume-checks.md).
  + The CloudWatch metric for `VolumeStalledIOCheck` will be `1` if volume I/O is paused for over 60 seconds. For more information, see [Metrics for Amazon EBS volumes](using_cloudwatch_ebs.md#ebs-volume-metrics).
  + The CloudWatch metrics for queue length (`VolumeQueueLength`) will be non-zero. Any alarms or monitoring should monitor for a non-zero queue depth.
  + The CloudWatch metrics for `VolumeReadOps` or `VolumeWriteOps` will be `0`, which indicates that the volume is no longer processing I/O.
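Taken together, the metric behaviors above form a simple paused-I/O signature that your monitoring can check for. The following sketch applies that signature to one minute of sample CloudWatch datapoints; the metric names come from the list above, and the values are illustrative.

```python
# Sketch of the stalled-volume signal described above, applied to one
# minute of sample CloudWatch datapoints (values are illustrative).
def is_stalled(datapoint):
    """Return True when the metrics match the paused-I/O pattern:
    the stalled-I/O check fails, the queue is non-empty, and no
    read or write operations are completing."""
    return (
        datapoint["VolumeStalledIOCheck"] == 1
        and datapoint["VolumeQueueLength"] > 0
        and datapoint["VolumeReadOps"] == 0
        and datapoint["VolumeWriteOps"] == 0
    )

healthy = {"VolumeStalledIOCheck": 0, "VolumeQueueLength": 3.2,
           "VolumeReadOps": 1200, "VolumeWriteOps": 480}
paused  = {"VolumeStalledIOCheck": 1, "VolumeQueueLength": 12.0,
           "VolumeReadOps": 0, "VolumeWriteOps": 0}

print(is_stalled(healthy), is_stalled(paused))  # False True
```

You could wire a condition like this into a CloudWatch alarm to verify, during an experiment, that your monitoring detects the pause.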

You can perform a basic experiment from the Amazon EC2 console, or you can perform more advanced experiments using the AWS FIS console. For more information about performing advanced experiments using the AWS FIS console, see [Tutorials for AWS FIS](https://docs.aws.amazon.com/fis/latest/userguide/fis-tutorials.html) in the *AWS Fault Injection Service User Guide*.

**To perform a basic experiment using the Amazon EC2 console**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volume for which to pause I/O and choose **Actions**, **Fault injection**, **Pause volume I/O**.

1. For **Duration**, enter the duration for which to pause I/O between the volume and the instances. The field next to the **Duration** dropdown list shows the duration in ISO 8601 format.

1. In the **Service access** section, select the IAM service role for AWS FIS to assume to perform the experiment. You can use either the default role, or an existing role that you created. For more information, see [Create an IAM role for AWS FIS experiments](https://docs.aws.amazon.com/fis/latest/userguide/getting-started-iam-service-role.html).

1. Choose **Pause volume I/O**. When prompted, enter `start` in the confirmation field and choose **Start experiment**.

1. Monitor the progress and impact of your experiment. For more information, see [Monitoring AWS FIS](https://docs.aws.amazon.com/fis/latest/userguide/monitoring-experiments.html) in the *AWS FIS User Guide*.
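The duration you enter is displayed in ISO 8601 duration format (step 4 above). As a quick illustration of that format, the following sketch converts a duration in minutes to the corresponding ISO 8601 string; the helper name is hypothetical.

```python
def iso8601_duration(minutes):
    """Format a whole-minute duration as an ISO 8601 duration string,
    the format the console displays next to the Duration field.
    (Hypothetical helper for illustration.)"""
    hours, mins = divmod(minutes, 60)
    parts = ["PT"]
    if hours:
        parts.append(f"{hours}H")
    if mins or not hours:
        parts.append(f"{mins}M")
    return "".join(parts)

print(iso8601_duration(5))   # PT5M
print(iso8601_duration(90))  # PT1H30M
```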

# Latency injection


Use the Latency Injection action (`aws:ebs:volume-io-latency`) in AWS FIS to simulate elevated I/O latency on your Amazon EBS volumes and test how your applications respond to storage performance degradation. This action allows you to specify the latency value to inject as well as the percentage of I/O that is impacted on the target volume. With AWS FIS, you can use pre-configured latency experiment templates to get started with testing different I/O latency patterns that may be observed during storage faults. These templates are designed as an initial set of scenarios for introducing disruptions to your applications to test resiliency; they are not designed to encompass all types of impact your applications can experience in the real world. We recommend that you adapt them to run multiple different tests based on the performance needs of your applications. You can customize the available templates or create new experiment templates to test for your application-specific requirements.

**Pre-configured latency experiment templates**  
Amazon EBS provides the following latency experiment templates through the EBS Console and the [AWS FIS scenario library](https://docs.aws.amazon.com/fis/latest/userguide/scenario-library-scenarios.html). You can directly use these templates on your target volumes to run a latency injection experiment.
+ **Sustained Latency** — Simulates constant latency. This experiment uses one latency injection action and has a total duration of 15 minutes. It simulates persistent latency on 50 percent of read I/O and 100 percent of write I/O: 500 ms for 15 minutes.
+ **Increasing Latency** — Simulates gradually increasing latency. This experiment uses five latency injection actions and has a total duration of 15 minutes. It simulates a gradual increase in latency on 10 percent of read I/O and 25 percent of write I/O: 50 ms for 3 minutes, 200 ms for 3 minutes, 700 ms for 3 minutes, 1 second for 3 minutes, and 15 seconds for 3 minutes.
+ **Intermittent Latency** — Simulates sharp intermittent latency spikes with periods of recovery in between. This experiment uses three latency injection actions and has a total duration of 15 minutes. It simulates three latency spikes on 0.1 percent of read and write I/O: a 30-second spike that lasts for 1 minute, a 10-second spike that lasts for 2 minutes, and a 20-second spike that lasts for 2 minutes, with 5-minute recovery periods between spikes.
+ **Decreasing Latency** — Simulates gradually decreasing latency. This experiment uses five latency injection actions and has a total duration of 15 minutes. It simulates a gradual decrease in latency on 10 percent of read I/O and write I/O: 20 seconds for 3 minutes, 5 seconds for 3 minutes, 900 ms for 3 minutes, 300 ms for 3 minutes, and 40 ms for 3 minutes.
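A template like Increasing Latency can be represented as a schedule of (latency, duration) actions, which makes it easy to sanity-check before running. The sketch below encodes the values listed above; the list representation itself is illustrative, not an AWS FIS data structure.

```python
# The Increasing Latency template above, expressed as a schedule of
# (injected latency in ms, action duration in minutes) pairs.
# This representation is illustrative, not an AWS FIS data structure.
increasing = [(50, 3), (200, 3), (700, 3), (1_000, 3), (15_000, 3)]

total_minutes = sum(duration for _, duration in increasing)
peak_latency_ms = max(latency for latency, _ in increasing)
print(total_minutes, peak_latency_ms)  # 15 15000
```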

**Customize preconfigured scenarios**

You can customize the preconfigured templates above or create your own experiment templates using the following customizable parameters.
+ `readIOPercentage` — Percentage of read I/O operations that latency will be injected on. This is the percentage of all read I/O operations on the volume that will be impacted by the action.

  Range: Min 0.1%, Max 100%
+ `readIOLatencyMilliseconds` — Amount of latency injected on read I/O operations. This is the latency value that will be observed on the specified percentage of the read I/O during the experiment.

  Range: Min 1 ms (io2) / 10 ms (non-io2), Max 60 seconds
+ `writeIOPercentage` — Percentage of write I/O operations that latency will be injected on. This is the percentage of all write I/O operations on the volume that will be impacted by the action.

  Range: Min 0.1%, Max 100%
+ `writeIOLatencyMilliseconds` — Amount of latency injected on write I/O operations. This is the latency value that will be observed on the specified percentage of the write I/O during the experiment.

  Range: Min 1 ms (io2) / 10 ms (non-io2), Max 60 seconds
+ `duration` — Duration for which the latency will be injected on the percentage of I/O selected.

  Range: Min 1 second, Max 12 hours
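When building a custom template, it can help to check the parameter values against the documented ranges before submitting the experiment. The following is a minimal sketch of such a check; the helper function is hypothetical, and only the percentage and latency ranges from the list above are validated.

```python
# Hypothetical helper that checks latency-injection parameters against
# the documented ranges before you build an experiment template.
def validate_latency_params(params, is_io2=False):
    """Return a list of range violations (empty when valid)."""
    min_latency_ms = 1 if is_io2 else 10  # io2 minimum is 1 ms, non-io2 is 10 ms
    max_latency_ms = 60_000               # maximum is 60 seconds
    errors = []
    for key in ("readIOPercentage", "writeIOPercentage"):
        if not 0.1 <= params[key] <= 100:
            errors.append(f"{key} must be between 0.1 and 100")
    for key in ("readIOLatencyMilliseconds", "writeIOLatencyMilliseconds"):
        if not min_latency_ms <= params[key] <= max_latency_ms:
            errors.append(
                f"{key} must be between {min_latency_ms} and {max_latency_ms} ms")
    return errors

params = {
    "readIOPercentage": 10,
    "readIOLatencyMilliseconds": 500,
    "writeIOPercentage": 25,
    "writeIOLatencyMilliseconds": 500,
}
print(validate_latency_params(params))  # []
print(validate_latency_params({**params, "readIOLatencyMilliseconds": 5}))
```

The second call fails for a non-io2 volume because 5 ms is below the 10 ms minimum; the same value would pass with `is_io2=True`.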

**Monitoring latency injection**  
You can monitor the performance impact on your volumes in the following ways:
+ Use average latency metrics in CloudWatch to get per-minute average I/O latency. For more information, see [Monitor your EBS volumes using CloudWatch](https://docs.aws.amazon.com/ebs/latest/userguide/using_cloudwatch_ebs.html).
+ Use EBS detailed performance statistics available through NVMe-CLI, CloudWatch agent, and Prometheus to get per-second average I/O latency. The detailed metrics also provide I/O latency histograms that you can use to analyze latency variance on your volumes. For more information, see [NVMe detailed performance statistics](https://docs.aws.amazon.com/ebs/latest/userguide/nvme-detailed-performance-stats.html).
+ Use the [Amazon EBS volume status checks](monitoring-volume-checks.md). When you inject I/O latency, the volume's status transitions to the `warning` state.

**Considerations**  
Consider the following when using EBS latency injection:
+ Latency injection is supported on all [Nitro-based instance types](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html), except: P4d, P5, P5e, Trn2u, G6, G6f, Gr6, Gr6f, M8i, M8i-flex, C8i-flex, R8i, R8i-flex, I8ge, Mac-m4pro, and Mac-m4.
+ You might see up to 5 percent variance in the latency value specified in the experiment and the resultant latency observed.
+ If you drive a very small number of I/O operations, the percentage of I/O specified in the action parameters might not match the actual percentage of I/O impacted by the action.

**To run a latency injection experiment on an Amazon EBS volume**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Volumes**.

1. Select the volumes on which to run the experiment and choose **Actions**, **Resilience testing**, **Inject volume I/O latency**.

   The AWS Fault Injection Service console opens. 

1. In the **Create experiment** window, select the type of experiment to run: **Intermittent**, **Increasing**, **Sustained**, or **Decreasing**.

1. For **IAM role selection**, choose **Create a new role** to create a new role that AWS FIS will use to conduct the experiments on your behalf. Alternatively, choose **Use an existing IAM role** if you previously created an IAM role with the required permissions.

1. The **Pricing estimate** section gives you an estimate of the cost of running the experiment. With AWS FIS, you are charged per minute that an action runs, from start to finish, based on the number of target accounts for your experiment.

1. Choose **Start experiment**.