

 **This page is only for existing customers of the Amazon Glacier service using Vaults and the original REST API from 2012.**

If you're looking for archival storage solutions, we recommend using the Amazon S3 Glacier storage classes in Amazon S3: S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. To learn more about these storage options, see [Amazon Glacier storage classes](https://aws.amazon.com/s3/storage-classes/glacier/).

Amazon Glacier (original standalone vault-based service) is no longer accepting new customers. Amazon Glacier is a standalone service with its own APIs that stores data in vaults and is distinct from Amazon S3 and the Amazon S3 Glacier storage classes. Your existing data will remain secure and accessible in Amazon Glacier indefinitely. No migration is required. For low-cost, long-term archival storage, AWS recommends the [Amazon S3 Glacier storage classes](https://aws.amazon.com/s3/storage-classes/glacier/), which deliver a superior customer experience with S3 bucket-based APIs, full AWS Region availability, lower costs, and AWS service integration. If you want enhanced capabilities, consider migrating to Amazon S3 Glacier storage classes by using our [AWS Solutions Guidance for transferring data from Amazon Glacier vaults to Amazon S3 Glacier storage classes](https://aws.amazon.com/solutions/guidance/data-transfer-from-amazon-s3-glacier-vaults-to-amazon-s3/).

# Uploading an Archive in Amazon Glacier
Uploading an Archive

Amazon Glacier provides a management console, which you can use to create and delete vaults. However, you cannot upload archives to Amazon Glacier by using the management console. To upload data, such as photos, videos, and other documents, you must either use the AWS CLI or write code to make requests, using either the REST API directly or the AWS SDKs. 

For information about using Amazon Glacier with the AWS CLI, see the [AWS CLI Reference for Amazon Glacier](http://docs.aws.amazon.com/cli/latest/reference/glacier/index.html). To install the AWS CLI, see [AWS Command Line Interface](http://aws.amazon.com/cli/). The following **Uploading** topics describe how to upload archives to Amazon Glacier by using the AWS SDK for Java, the AWS SDK for .NET, and the REST API.

**Topics**
+ [Options for Uploading an Archive to Amazon Glacier](#uploading-an-archive-overview)
+ [Uploading an Archive in a Single Operation](uploading-archive-single-operation.md)
+ [Uploading Large Archives in Parts (Multipart Upload)](uploading-archive-mpu.md)

## Options for Uploading an Archive to Amazon Glacier
Options for Uploading an Archive

Depending on the size of the data you are uploading, Amazon Glacier offers the following options: 
+ **Upload archives in a single operation** – In a single operation, you can upload archives from 1 byte to up to 4 GB in size. However, we encourage Amazon Glacier customers to use multipart upload to upload archives greater than 100 MB. For more information, see [Uploading an Archive in a Single Operation](uploading-archive-single-operation.md).
+ **Upload archives in parts** – Using the multipart upload API, you can upload large archives, up to about 40,000 GB (10,000 x 4 GB). 

  The multipart upload API call is designed to improve the upload experience for larger archives. You can upload archives in parts. These parts can be uploaded independently, in any order, and in parallel. If a part upload fails, you only need to upload that part again and not the entire archive. You can use multipart upload for archives from 1 byte to about 40,000 GB in size. For more information, see [Uploading Large Archives in Parts (Multipart Upload)](uploading-archive-mpu.md).
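The arithmetic behind these limits can be sketched in Python. This is an illustrative helper (not AWS code) that picks the smallest workable part size, using only the limits described in this guide: at most 10,000 parts, with a part size that is a power-of-2 multiple of 1 MiB between 1 MiB and 4 GiB (see the Quick Facts table).

```python
# Illustrative sketch, not AWS code. Limits assumed from this guide:
# at most 10,000 parts; part size is a power-of-2 multiple of 1 MiB, up to 4 GiB.
MIB = 1024 * 1024
GIB = 1024 * MIB
MAX_PARTS = 10_000

def smallest_valid_part_size(archive_bytes: int) -> int:
    """Return the smallest allowed part size that fits the archive in <= 10,000 parts."""
    size = MIB
    while size <= 4 * GIB:
        parts_needed = -(-archive_bytes // size)  # ceiling division
        if parts_needed <= MAX_PARTS:
            return size
        size *= 2
    raise ValueError("archive exceeds the 10,000 x 4 GiB multipart limit")

print(smallest_valid_part_size(50 * GIB) // MIB)  # 8 (MiB): 50 GiB fits in 6,400 parts of 8 MiB
```

For example, a 50 GiB archive would need 51,200 parts at the minimum 1 MiB part size, so the part size must be doubled until the part count drops to 10,000 or fewer.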

**Important**  
The Amazon Glacier vault inventory is updated only about once a day. When you upload an archive, you won't see the new archive in your vault (in the console or in a downloaded vault inventory list) until the vault inventory has been updated.

### Using the AWS Snowball Edge Service


AWS Snowball Edge accelerates moving large amounts of data into and out of AWS by using Amazon-owned devices, bypassing the internet. For more information, see the [AWS Snowball Edge](http://aws.amazon.com/snowball) detail page. 

To upload existing data to Amazon Glacier, you might consider using one of the AWS Snowball Edge device types to import data into Amazon S3, and then moving it to the Amazon Glacier storage class for archival by using lifecycle rules. When you transition Amazon S3 objects to the Amazon Glacier storage class, Amazon S3 internally uses Amazon Glacier for durable storage at lower cost. Although the objects are stored in Amazon Glacier, they remain Amazon S3 objects that you manage in Amazon S3, and you cannot access them directly through Amazon Glacier.

For more information about Amazon S3 lifecycle configuration and transitioning objects to the Amazon Glacier storage class, see [Object Lifecycle Management](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) and [Transitioning Objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html) in the *Amazon Simple Storage Service User Guide*.

# Uploading an Archive in a Single Operation


As described in [Uploading an Archive in Amazon Glacier](uploading-an-archive.md), you can upload smaller archives in a single operation. However, we encourage Amazon Glacier customers to use multipart upload for archives greater than 100 MB. 

**Topics**
+ [Uploading an Archive in a Single Operation Using the AWS Command Line Interface](uploading-an-archive-single-op-using-cli.md)
+ [Uploading an Archive in a Single Operation Using the AWS SDK for Java](uploading-an-archive-single-op-using-java.md)
+ [Uploading an Archive in a Single Operation Using the AWS SDK for .NET in Amazon Glacier](uploading-an-archive-single-op-using-dotnet.md)
+ [Uploading an Archive in a Single Operation Using the REST API](uploading-an-archive-single-op-using-rest.md)

# Uploading an Archive in a Single Operation Using the AWS Command Line Interface
Uploading an Archive in a Single Operation Using the AWS CLI

You can upload an archive to Amazon Glacier by using the AWS Command Line Interface (AWS CLI).

**Topics**
+ [(Prerequisite) Setting Up the AWS CLI](#Creating-Vaults-CLI-Setup)
+ [Example: Upload an Archive Using the AWS CLI](#Uploading-Archives-CLI-Implementation)

## (Prerequisite) Setting Up the AWS CLI


1. Download and configure the AWS CLI. For instructions, see the following topics in the *AWS Command Line Interface User Guide*: 

    [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) 

   [Configuring the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html)

1. Verify your AWS CLI setup by entering the following commands at the command prompt. These commands don't provide credentials explicitly, so the credentials of the default profile are used.
   + Try using the help command.

     ```
     aws help
     ```
   + To get a list of Amazon Glacier vaults on the configured account, use the `list-vaults` command. Replace *123456789012* with your AWS account ID.

     ```
     aws glacier list-vaults --account-id 123456789012
     ```
   + To see the current configuration data for the AWS CLI, use the `aws configure list` command.

     ```
     aws configure list
     ```

## Example: Upload an Archive Using the AWS CLI


Before you can upload an archive, you must have created a vault. For more information about creating vaults, see [Creating a Vault in Amazon Glacier](creating-vaults.md).

1. Use the `upload-archive` command to add an archive to an existing vault. In the following example, replace the vault name and account ID with your own values. For the `--body` parameter, specify the path to the file that you want to upload.

   ```
   aws glacier upload-archive --vault-name awsexamplevault --account-id 123456789012 --body archive.zip
   ```

1.  Expected output:

   ```
   {
       "archiveId": "kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw",
       "checksum": "969fb39823836d81f0cc028195fcdbcbbe76cdde932d4646fa7de5f21e18aa67",
       "location": "/123456789012/vaults/awsexamplevault/archives/kKB7ymWJVpPSwhGP6ycSOAekp9ZYe_--zM_mw6k76ZFGEIWQX-ybtRDvc2VkPSDtfKmQrj0IRQLSGsNuDp-AJVlu2ccmDSyDUmZwKbwbpAdGATGDiB3hHO0bjbGehXTcApVud_wyDw"
   }
   ```

   When finished, the command outputs the archive ID, checksum, and location of the archive in Amazon Glacier. For more information about the `upload-archive` command, see [upload-archive](https://docs.aws.amazon.com/cli/latest/reference/glacier/upload-archive.html) in the *AWS CLI Command Reference*.
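Because the vault inventory is updated only about once a day, the archive ID in this response is your only immediate handle on the new archive. The following Python sketch (the captured-response variable and manifest format are illustrative assumptions, not part of the AWS CLI) parses the command's JSON output and records the archive ID for later retrieval or deletion:

```python
import json

# Hypothetical: output captured from
#   aws glacier upload-archive ... > response.json
response_text = '''
{
    "archiveId": "EXAMPLEARCHIVEID",
    "checksum": "969fb39823836d81f0cc028195fcdbcbbe76cdde932d4646fa7de5f21e18aa67",
    "location": "/123456789012/vaults/awsexamplevault/archives/EXAMPLEARCHIVEID"
}
'''

response = json.loads(response_text)
# Record the archive ID yourself: Amazon Glacier does not let you look an
# archive up by name, only by ID or through the daily vault inventory.
manifest_line = f'awsexamplevault,{response["archiveId"]}'
print(manifest_line)  # awsexamplevault,EXAMPLEARCHIVEID
```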

# Uploading an Archive in a Single Operation Using the AWS SDK for Java
Uploading an Archive in a Single Operation Using Java

Both the [high-level and low-level APIs](using-aws-sdk.md) provided by the AWS SDK for Java provide a method to upload an archive.

**Topics**
+ [Uploading an Archive Using the High-Level API of the AWS SDK for Java](#uploading-an-archive-single-op-high-level-using-java)
+ [Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for Java](#uploading-an-archive-single-op-low-level-using-java)

## Uploading an Archive Using the High-Level API of the AWS SDK for Java


The `ArchiveTransferManager` class of the high-level API provides the `upload` method, which you can use to upload an archive to a vault.

 

**Note**  
You can use the `upload` method to upload small or large archives. Depending on the archive size you are uploading, this method determines whether to upload it in a single operation or use the multipart upload API to upload the archive in parts.

### Example: Uploading an Archive Using the High-Level API of the AWS SDK for Java


The following Java code example uploads an archive to a vault (`examplevault`) in the US West (Oregon) Region (`us-west-2`). For a list of supported AWS Regions and endpoints, see [Accessing Amazon Glacier](amazon-glacier-accessing.md). 

For step-by-step instructions on how to run this example, see [Running Java Examples for Amazon Glacier Using Eclipse](using-aws-sdk-for-java.md#setting-up-and-testing-sdk-java). You need to update the code as shown with the name of the vault you want to upload to and the name of the file you want to upload.

**Example**  

```
import java.io.File;
import java.io.IOException;
import java.util.Date;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.glacier.transfer.UploadResult;


public class ArchiveUploadHighLevel {
    public static String vaultName = "*** provide vault name ***";
    public static String archiveToUpload = "*** provide name of file to upload ***";
    
    public static AmazonGlacierClient client;
    
    public static void main(String[] args) throws IOException {
        
        
    	ProfileCredentialsProvider credentials = new ProfileCredentialsProvider();
    	
        client = new AmazonGlacierClient(credentials);
        client.setEndpoint("https://glacier.us-west-2.amazonaws.com/");

        try {
            ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
            
            UploadResult result = atm.upload(vaultName, "my archive " + (new Date()), new File(archiveToUpload));
            System.out.println("Archive ID: " + result.getArchiveId());
            
        } catch (Exception e)
        {
            System.err.println(e);
        }
    }
}
```

## Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for Java


The low-level API provides methods for all the archive operations. The following are the steps to upload an archive using the AWS SDK for Java.

 

1. Create an instance of the `AmazonGlacierClient` class (the client). 

   You need to specify an AWS Region where you want to upload the archive. All operations you perform using this client apply to that AWS Region. 

1. Provide request information by creating an instance of the `UploadArchiveRequest` class.

   In addition to the data you want to upload, you need to provide a checksum (SHA-256 tree hash) of the payload, the vault name, the content length of the data, and your account ID. 

   If you don't provide an account ID, then the account ID associated with the credentials you provide to sign the request is assumed. For more information, see [Using the AWS SDK for Java with Amazon Glacier](using-aws-sdk-for-java.md). 

1. Run the `uploadArchive` method by providing the request object as a parameter. 

   In response, Amazon Glacier returns an archive ID for the newly uploaded archive. 

The following Java code snippet illustrates the preceding steps. 

```
AmazonGlacierClient client;

UploadArchiveRequest request = new UploadArchiveRequest()
    .withVaultName("*** provide vault name ***")
    .withChecksum(checksum)
    .withBody(new ByteArrayInputStream(body))
    .withContentLength((long)body.length);

UploadArchiveResult uploadArchiveResult = client.uploadArchive(request);

System.out.println("Location (includes ArchiveID): " + uploadArchiveResult.getLocation());
```

### Example: Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for Java


The following Java code example uses the AWS SDK for Java to upload an archive to a vault (`examplevault`). For step-by-step instructions on how to run this example, see [Running Java Examples for Amazon Glacier Using Eclipse](using-aws-sdk-for-java.md#setting-up-and-testing-sdk-java). You need to update the code as shown with the name of the vault you want to upload to and the name of the file you want to upload.

```
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.TreeHashGenerator;
import com.amazonaws.services.glacier.model.UploadArchiveRequest;
import com.amazonaws.services.glacier.model.UploadArchiveResult;
public class ArchiveUploadLowLevel {

    public static String vaultName = "*** provide vault name ***";
    public static String archiveFilePath = "*** provide file to upload ***";
    public static AmazonGlacierClient client;
    
    public static void main(String[] args) throws IOException {
    	
    	ProfileCredentialsProvider credentials = new ProfileCredentialsProvider();

        client = new AmazonGlacierClient(credentials);
        client.setEndpoint("https://glacier.us-east-1.amazonaws.com/");

        try {
            // First open the file and read it fully into memory. A single
            // InputStream.read call is not guaranteed to fill the buffer,
            // so use readFully, and close the stream when done.
            File file = new File(archiveFilePath);
            byte[] body = new byte[(int) file.length()];
            try (java.io.DataInputStream is = new java.io.DataInputStream(new FileInputStream(file))) {
                is.readFully(body);
            }
                                    
            // Send request.
            UploadArchiveRequest request = new UploadArchiveRequest()
                .withVaultName(vaultName)
                .withChecksum(TreeHashGenerator.calculateTreeHash(new File(archiveFilePath))) 
                .withBody(new ByteArrayInputStream(body))
                .withContentLength((long)body.length);
            
            UploadArchiveResult uploadArchiveResult = client.uploadArchive(request);
            
            System.out.println("ArchiveID: " + uploadArchiveResult.getArchiveId());
            
        } catch (Exception e)
        {
            System.err.println("Archive not uploaded.");
            System.err.println(e);
        }
    }
}
```

# Uploading an Archive in a Single Operation Using the AWS SDK for .NET in Amazon Glacier
Uploading an Archive in a Single Operation Using .NET

Both the [high-level and low-level APIs](using-aws-sdk.md) provided by the AWS SDK for .NET provide a method to upload an archive in a single operation.

**Topics**
+ [Uploading an Archive Using the High-Level API of the AWS SDK for .NET](#uploading-an-archive-single-op-highlevel-using-dotnet)
+ [Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for .NET](#uploading-an-archive-single-op-lowlevel-using-dotnet)

## Uploading an Archive Using the High-Level API of the AWS SDK for .NET


The `ArchiveTransferManager` class of the high-level API provides the `Upload` method that you can use to upload an archive to a vault. 

**Note**  
You can use the `Upload` method to upload small or large files. Depending on the file size you are uploading, this method determines whether to upload it in a single operation or use the multipart upload API to upload the file in parts.

### Example: Uploading an Archive Using the High-Level API of the AWS SDK for .NET


The following C# code example uploads an archive to a vault (`examplevault`) in the US West (Oregon) Region. 

For step-by-step instructions on how to run this example, see [Running Code Examples](using-aws-sdk-for-dot-net.md#setting-up-and-testing-sdk-dotnet). You need to update the code as shown with the name of the file you want to upload.

**Example**  

```
using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;

namespace glacier.amazon.com.rproxy.govskope.ca.docsamples
{
  class ArchiveUploadHighLevel
  {
    static string vaultName = "examplevault"; 
    static string archiveToUpload = "*** Provide file name (with full path) to upload ***";

    public static void Main(string[] args)
    {
       try
      {
         var manager = new ArchiveTransferManager(Amazon.RegionEndpoint.USWest2);
         // Upload an archive.
         string archiveId = manager.Upload(vaultName, "upload archive test", archiveToUpload).ArchiveId;
         Console.WriteLine("Archive ID: (Copy and save this ID for use in other examples.) : {0}", archiveId);
         Console.WriteLine("To continue, press Enter"); 
         Console.ReadKey();
      }
      catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
      catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
      catch (Exception e) { Console.WriteLine(e.Message); }
      Console.WriteLine("To continue, press Enter");
      Console.ReadKey();
    }
  }
}
```

## Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for .NET


The low-level API provides methods for all the archive operations. The following are the steps to upload an archive using the AWS SDK for .NET.

 

1. Create an instance of the `AmazonGlacierClient` class (the client). 

   You need to specify an AWS Region where you want to upload the archive. All operations you perform using this client apply to that AWS Region. 

1. Provide request information by creating an instance of the `UploadArchiveRequest` class.

   In addition to the data you want to upload, you need to provide a checksum (SHA-256 tree hash) of the payload, the vault name, and your account ID. 

   If you don't provide an account ID, then the account ID associated with the credentials you provide to sign the request is assumed. For more information, see [Using the AWS SDK for .NET with Amazon Glacier](using-aws-sdk-for-dot-net.md). 

1. Run the `UploadArchive` method by providing the request object as a parameter. 

   In response, Amazon Glacier returns an archive ID of the newly uploaded archive. 

### Example: Uploading an Archive in a Single Operation Using the Low-Level API of the AWS SDK for .NET


The following C# code example illustrates the preceding steps. The example uses the AWS SDK for .NET to upload an archive to a vault (`examplevault`). 

**Note**  
For information about the underlying REST API to upload an archive in a single request, see [Upload Archive (POST archive)](api-archive-post.md).

For step-by-step instructions on how to run this example, see [Running Code Examples](using-aws-sdk-for-dot-net.md#setting-up-and-testing-sdk-dotnet). You need to update the code as shown with the name of the file you want to upload.

**Example**  

```
using System;
using System.IO;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;

namespace glacier.amazon.com.rproxy.govskope.ca.docsamples
{
  class ArchiveUploadSingleOpLowLevel
  {
    static string vaultName       = "examplevault";
    static string archiveToUpload = "*** Provide file name (with full path) to upload ***";

    public static void Main(string[] args)
    {
      AmazonGlacierClient client;
      try
      {
         using (client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2))
        {
          Console.WriteLine("Uploading an archive.");
          string archiveId = UploadAnArchive(client);
          Console.WriteLine("Archive ID: {0}", archiveId);
        }
      }
      catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
      catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
      catch (Exception e) { Console.WriteLine(e.Message); }
      Console.WriteLine("To continue, press Enter");
      Console.ReadKey();
    }

    static string UploadAnArchive(AmazonGlacierClient client)
    {
      using (FileStream fileStream = new FileStream(archiveToUpload, FileMode.Open, FileAccess.Read))
      {
        string treeHash = TreeHashGenerator.CalculateTreeHash(fileStream);
        UploadArchiveRequest request = new UploadArchiveRequest()
        {
          VaultName = vaultName,
          Body = fileStream,
          Checksum = treeHash
        };
        UploadArchiveResponse response = client.UploadArchive(request);
        string archiveID = response.ArchiveId;
        return archiveID;
      }
    }
  }
}
```

# Uploading an Archive in a Single Operation Using the REST API
Uploading an Archive in a Single Operation Using REST

You can use the Upload Archive API call to upload an archive in a single operation. For more information, see [Upload Archive (POST archive)](api-archive-post.md).

# Uploading Large Archives in Parts (Multipart Upload)
Uploading Large Archives in Parts

**Topics**
+ [Multipart Upload Process](#MPUprocess)
+ [Quick Facts](#qfacts)
+ [Uploading Large Archives by Using the AWS CLI](uploading-an-archive-mpu-using-cli.md)
+ [Uploading Large Archives in Parts Using the AWS SDK for Java](uploading-an-archive-mpu-using-java.md)
+ [Uploading Large Archives Using the AWS SDK for .NET](uploading-an-archive-mpu-using-dotnet.md)
+ [Uploading Large Archives in Parts Using the REST API](uploading-an-archive-mpu-using-rest.md)

## Multipart Upload Process


As described in [Uploading an Archive in Amazon Glacier](uploading-an-archive.md), we encourage Amazon Glacier customers to use multipart upload for archives greater than 100 mebibytes (MiB). 

1. **Initiate Multipart Upload** 

   When you send a request to initiate a multipart upload, Amazon Glacier returns a multipart upload ID, which is a unique identifier for your multipart upload. Any subsequent multipart upload operations require this ID. This ID doesn't expire for at least 24 hours after Amazon Glacier completes the job. 

   In your request to start a multipart upload, specify the part size in number of bytes. Each part you upload, except the last part, must be this size.
**Note**  
You don't need to know the overall archive size when using multipart uploads. This means that you can use multipart uploads in cases where you don't know the archive size when you start uploading the archive. You only need to decide the part size at the time you start the multipart upload.

   In the initiate multipart upload request, you can also provide an optional archive description. 

1. **Upload Parts**

   For each part upload request, you must include the multipart upload ID you obtained in step 1. In the request, you must also specify the content range, in bytes, identifying the position of the part in the final archive. Amazon Glacier later uses the content range information to assemble the archive in proper sequence. Because you provide the content range for each part that you upload, it determines the part's position in the final assembly of the archive, and therefore you can upload parts in any order. You can also upload parts in parallel. If you upload a new part using the same content range as a previously uploaded part, the previously uploaded part is overwritten. 

1. **Complete (or stop) Multipart Upload**

   After uploading all the archive parts, you use the complete operation. Again, you must specify the upload ID in your request. Amazon Glacier creates an archive by concatenating parts in ascending order based on the content ranges you provided. Amazon Glacier's response to a Complete Multipart Upload request includes an archive ID for the newly created archive. If you provided an optional archive description in the Initiate Multipart Upload request, Amazon Glacier associates it with the assembled archive. After you successfully complete a multipart upload, you cannot refer to the multipart upload ID, which means you can no longer access the parts associated with it.

   If you stop a multipart upload, you cannot upload any more parts using that multipart upload ID, and all storage consumed by parts associated with the stopped upload is freed. Any part uploads that were in progress when you stopped the upload can still succeed or fail.
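The content-range bookkeeping in step 2 can be sketched in Python. This is pure arithmetic (the actual upload requests are not shown); ranges are inclusive byte positions, as used in a `Content-Range` header of the form `bytes first-last/*`:

```python
# Yields the inclusive (first_byte, last_byte) range for each part.
# Only the last part may be smaller than the chosen part size.
def part_ranges(archive_bytes: int, part_size: int):
    first = 0
    while first < archive_bytes:
        last = min(first + part_size, archive_bytes) - 1
        yield (first, last)
        first = last + 1

MIB = 1024 * 1024
for first, last in part_ranges(2 * MIB + 500, MIB):
    print(f"bytes {first}-{last}/*")
# bytes 0-1048575/*
# bytes 1048576-2097151/*
# bytes 2097152-2097651/*
```

Because each part carries its own range, the parts can be uploaded in any order, or in parallel, and Amazon Glacier still assembles them in the correct sequence.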

### Additional Multipart Upload Operations


Amazon Glacier provides the following additional multipart upload API calls.

 
+ **List Parts** – Using this operation, you can list the parts of a specific multipart upload. It returns information about the parts that you have uploaded so far. For each List Parts request, Amazon Glacier returns information for up to 1,000 parts. If there are more parts to list, the result is paginated and the response includes a marker from which to continue the list; you send additional requests to retrieve the subsequent parts. The returned list doesn't include parts that haven't finished uploading.
+ **List Multipart Uploads** – Using this operation, you can obtain a list of multipart uploads in progress. An in-progress multipart upload is an upload that you have initiated but have not yet completed or stopped. For each List Multipart Uploads request, Amazon Glacier returns up to 1,000 multipart uploads. If there are more to list, the result is paginated and the response includes a marker from which to continue the list; you send additional requests to retrieve the remaining uploads.
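Both list operations use the same marker-based pagination loop. Here is a minimal Python sketch with a stubbed-in page fetcher; `fetch_page` and the page shape are illustrative assumptions, not the real API response format:

```python
def list_all(fetch_page):
    """Follow markers until a page comes back without one."""
    items, marker = [], None
    while True:
        page = fetch_page(marker)      # stands in for one List Parts request
        items.extend(page["Items"])
        marker = page.get("Marker")    # absent/None means the list is complete
        if not marker:
            return items

# Hypothetical two-page result for illustration:
pages = {None: {"Items": ["part1", "part2"], "Marker": "m1"},
         "m1": {"Items": ["part3"]}}
print(list_all(lambda m: pages[m]))  # ['part1', 'part2', 'part3']
```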

## Quick Facts


The following table provides multipart upload core specifications.


| Item | Specification | 
| --- | --- | 
| Maximum archive size | 10,000 x 4 gibibytes (GiB)  | 
| Maximum number of parts per upload | 10,000 | 
| Part size | 1 MiB to 4 GiB, last part can be < 1 MiB. You specify the size value in bytes. The part size must be a mebibyte (1024 kibibytes [KiB]) multiplied by a power of 2. For example, `1048576` (1 MiB), `2097152` (2 MiB), `4194304` (4 MiB), `8388608` (8 MiB).   | 
| Maximum number of parts returned for a list parts request | 1,000  | 
| Maximum number of multipart uploads returned in a list multipart uploads request | 1,000  | 
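The part-size rule from the table can be checked mechanically. This is an illustrative Python helper (not AWS code) that verifies a size is a power-of-2 multiple of 1 MiB between 1 MiB and 4 GiB:

```python
# Illustrative check of the part-size specification above.
MIB = 1024 * 1024
GIB = 1024 * MIB

def is_valid_part_size(size_bytes: int) -> bool:
    if not (MIB <= size_bytes <= 4 * GIB) or size_bytes % MIB != 0:
        return False
    mib_multiple = size_bytes // MIB
    return mib_multiple & (mib_multiple - 1) == 0  # power of 2?

print(is_valid_part_size(1048576), is_valid_part_size(3145728))  # True False
```

For example, 3 MiB (`3145728`) is a multiple of 1 MiB but not a power-of-2 multiple, so it is rejected.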

# Uploading Large Archives by Using the AWS CLI
Uploading Large Archives in Parts by Using the AWS CLI

You can upload an archive to Amazon Glacier by using the AWS Command Line Interface (AWS CLI). To improve the upload experience for larger archives, Amazon Glacier provides several API operations to support multipart uploads. By using these API operations, you can upload archives in parts. These parts can be uploaded independently, in any order, and in parallel. If a part upload fails, you need to upload only that part again, not the entire archive. You can use multipart uploads for archives from 1 byte to about 40,000 gibibytes (GiB) in size. 

For more information about Amazon Glacier multipart uploads, see [Uploading Large Archives in Parts (Multipart Upload)](uploading-archive-mpu.md).

**Topics**
+ [(Prerequisite) Setting Up the AWS CLI](#Creating-Vaults-CLI-Setup)
+ [(Prerequisite) Install Python](#Uploading-Archives-mpu-CLI-Install-Python)
+ [(Prerequisite) Create an Amazon Glacier Vault](#Uploading-Archives-mpu-CLI-Create-Vault)
+ [Example: Uploading Large Archives in Parts by Using the AWS CLI](#Uploading-Archives-mpu-CLI-Implementation)

## (Prerequisite) Setting Up the AWS CLI


1. Download and configure the AWS CLI. For instructions, see the following topics in the *AWS Command Line Interface User Guide*: 

    [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) 

   [Configuring the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html)

1. Verify your AWS CLI setup by entering the following commands at the command prompt. These commands don't provide credentials explicitly, so the credentials of the default profile are used.
   + Try using the help command.

     ```
     aws help
     ```
   + To get a list of Amazon Glacier vaults on the configured account, use the `list-vaults` command. Replace *123456789012* with your AWS account ID.

     ```
     aws glacier list-vaults --account-id 123456789012
     ```
   + To see the current configuration data for the AWS CLI, use the `aws configure list` command.

     ```
     aws configure list
     ```

## (Prerequisite) Install Python


To complete a multipart upload, you must calculate the SHA256 tree hash of the archive that you're uploading. This differs from calculating the SHA256 tree hash of the file that you want to upload. To calculate the SHA256 tree hash of the archive that you're uploading, you can use Java, C# (with .NET), or Python. In this example, you will use Python. For instructions on using Java or C#, see [Computing Checksums](checksum-calculations.md). 

For more information about installing Python, see [Install or update Python](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#installation) in the *Boto3 Developer Guide*.
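As a rough illustration of what the calculation involves, here is a minimal Python sketch of a SHA256 tree hash: hash each 1 MiB chunk, then repeatedly hash adjacent digest pairs, carrying an unpaired digest up unchanged. For the authoritative algorithm and reference implementations, see [Computing Checksums](checksum-calculations.md).

```python
import hashlib

MIB = 1024 * 1024

def tree_hash(data: bytes) -> str:
    """SHA256 tree hash sketch: 1 MiB leaf chunks, pairwise combination."""
    # Leaf level: SHA-256 of each 1 MiB chunk (an empty payload hashes as one empty chunk).
    chunks = [data[i:i + MIB] for i in range(0, len(data), MIB)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    # Combine adjacent digests pairwise; an unpaired digest is carried up unchanged.
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()

# For payloads under 1 MiB there is a single chunk, so the tree hash
# equals the plain SHA-256 of the data.
print(tree_hash(b"hello"))
```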

## (Prerequisite) Create an Amazon Glacier Vault


To use the following example, you must have at least one Amazon Glacier vault created. For more information about creating vaults, see [Creating a Vault in Amazon Glacier](creating-vaults.md).

## Example: Uploading Large Archives in Parts by Using the AWS CLI


In this example, you will create a file and use multipart upload API operations to upload this file, in parts, to Amazon Glacier.
**Important**  
Before starting this procedure, make sure that you've performed all of the prerequisite steps. To upload an archive, you must have created a vault, configured the AWS CLI, and be prepared to use Java, C#, or Python to calculate a SHA256 tree hash.

The following procedure uses the `initiate-multipart-upload`, `upload-multipart-part`, and `complete-multipart-upload` AWS CLI commands. 

For more detailed information about each of these commands, see [initiate-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/glacier/initiate-multipart-upload.html), [upload-multipart-part](https://docs.aws.amazon.com/cli/latest/reference/glacier/upload-multipart-part.html), and [complete-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/glacier/complete-multipart-upload.html) in the *AWS CLI Command Reference*.

1. Use the [initiate-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/glacier/initiate-multipart-upload.html) command to create a multipart upload resource. In your request, specify the part size in bytes. Each part that you upload, except the last part, will be this size. You don't need to know the overall archive size when initiating an upload. However, you will need the total size, in bytes, of all of the uploaded parts when completing the upload in the final step.

   In the following command, replace the values for the `--vault-name` and `--account-id` parameters with your own information. This command specifies a part size of 1 mebibyte (MiB), or 1,048,576 (1024 x 1024) bytes, per part. Replace this `--part-size` parameter value if needed. 

   ```
   aws glacier initiate-multipart-upload --vault-name awsexamplevault --part-size 1048576 --account-id 123456789012
   ```

   Expected output:

   ```
   {
   "location": "/123456789012/vaults/awsexamplevault/multipart-uploads/uploadId",
   "uploadId": "uploadId"
   }
   ```

   When finished, the command will output the multipart upload resource's upload ID and location in Amazon Glacier. You will use this upload ID in subsequent steps.

1. For this example, you can use the following commands to create a 4.4 MiB file, split it into 1 MiB chunks, and upload each chunk. To upload your own files, you can follow a similar procedure of splitting your data into chunks and uploading each part. 

   

**Linux or macOS**  
The following `mkfile` command creates a 4.4 MiB file, named `file_to_upload`, on macOS. The `mkfile` utility isn't available on most Linux distributions; there, you can create an equivalent file with `dd if=/dev/zero of=file_to_upload bs=1024 count=4500`.

   ```
   mkfile -n 9000b file_to_upload
   ```

**Windows**  
The following command creates a 4.4 MiB file, named `file_to_upload`, on Windows.

   ```
   fsutil file createnew file_to_upload 4608000
   ```

1. Next, you will split this file into 1 MiB chunks. 

   ```
   split -b 1048576 file_to_upload chunk
   ```

   You now have the following five chunks. The first four are 1 MiB each, and the last is approximately 404 kibibytes (KiB). 

   ```
   chunkaa
   chunkab
   chunkac
   chunkad
   chunkae
   ```

1. Use the [upload-multipart-part](https://docs.aws.amazon.com/cli/latest/reference/glacier/upload-multipart-part.html) command to upload a part of an archive. You can upload archive parts in any order. You can also upload them in parallel. You can upload up to 10,000 parts for a multipart upload.

   In the following command, replace the values for the `--vault-name`, `--account-id`, and `--upload-id` parameters. The upload ID must match the ID returned as output of the `initiate-multipart-upload` command. The `--range` parameter specifies that you will upload a part with a size of 1 MiB (1024 x 1024 bytes). This size must match what you specified in the `initiate-multipart-upload` command. Adjust this size value if needed. The `--body` parameter specifies the name of the part that you're uploading.

   ```
   aws glacier upload-multipart-part --body chunkaa --range='bytes 0-1048575/*' --vault-name awsexamplevault --account-id 123456789012 --upload-id upload_ID
   ```

   If successful, the command will produce output that contains the checksum for the uploaded part.

1. Run the `upload-multipart-part` command again to upload the remaining parts of your multipart upload. Update the `--range` and `--body` parameter values for each command to match the part that you're uploading. 

   ```
   aws glacier upload-multipart-part --body chunkab --range='bytes 1048576-2097151/*' --vault-name awsexamplevault --account-id 123456789012 --upload-id upload_ID
   ```

   ```
   aws glacier upload-multipart-part --body chunkac --range='bytes 2097152-3145727/*' --vault-name awsexamplevault --account-id 123456789012 --upload-id upload_ID
   ```

   ```
   aws glacier upload-multipart-part --body chunkad --range='bytes 3145728-4194303/*' --vault-name awsexamplevault --account-id 123456789012 --upload-id upload_ID
   ```

   ```
   aws glacier upload-multipart-part --body chunkae --range='bytes 4194304-4607999/*' --vault-name awsexamplevault --account-id 123456789012 --upload-id upload_ID
   ```
**Note**  
The final command's `--range` parameter value is smaller because the final part of our upload is less than 1 MiB. If successful, each command will produce output that contains the checksum for each uploaded part.

1. Next, you will assemble the archive and finish the upload. You must include the total size and SHA256 tree hash of the archive.

   To calculate the SHA256 tree hash of the archive, you can use Java, C#, or Python. In this example, you will use Python. For instructions on using Java or C#, see [Computing Checksums](checksum-calculations.md).

   Create the Python file `checksum.py` and insert the following code. If needed, replace the name of the original file.

   ```
   from botocore.utils import calculate_tree_hash

   checksum = calculate_tree_hash(open('file_to_upload', 'rb'))
   print(checksum)
   ```

1. Run `checksum.py` to calculate the SHA256 tree hash. (Your hash may not match the following example output.)

   ```
   $ python3 checksum.py
   3d760edb291bfc9d90d35809243de092aea4c47b308290ad12d084f69988ae0c
   ```

1. Use the [complete-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/glacier/complete-multipart-upload.html) command to finish the archive upload. Replace the values for the `--vault-name`, `--account-id`, `--upload-id`, and `--checksum` parameters. The `--archive-size` parameter value specifies the total size, in bytes, of the archive. This value must be the sum of the sizes of all of the individual parts that you uploaded. Replace this value if needed. 

   ```
   aws glacier complete-multipart-upload --archive-size 4608000 --vault-name awsexamplevault --account-id 123456789012 --upload-id upload_ID --checksum checksum
   ```

   When finished, the command will output the archive's ID, checksum, and location in Amazon Glacier. 
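The part-range arithmetic and the SHA256 tree hash used in the preceding steps can be sketched in plain Python. This is a minimal illustration, assuming 1 MiB chunks as in the example above; `botocore`'s `calculate_tree_hash` implements the same tree-hash algorithm.

```python
import hashlib

MIB = 1024 * 1024


def part_ranges(total_size, part_size):
    """Yield (start, end) byte offsets for each part of a multipart upload."""
    for start in range(0, total_size, part_size):
        yield start, min(start + part_size, total_size) - 1


def tree_hash(data, chunk_size=MIB):
    """SHA256 tree hash: hash each 1 MiB chunk, then combine hashes pairwise
    until a single root hash remains (an odd leftover hash is carried up)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0].hex()


# The ranges for the 4,608,000-byte example file match the --range values above:
for start, end in part_ranges(4608000, MIB):
    print(f"bytes {start}-{end}/*")
```

For an archive that fits in memory, `tree_hash(open('file_to_upload', 'rb').read())` should produce the same value as the `checksum.py` script above.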

# Uploading Large Archives in Parts Using the AWS SDK for Java

Both the [high-level and low-level APIs](using-aws-sdk.md) provided by the AWS SDK for Java offer a way to upload a large archive (see [Uploading an Archive in Amazon Glacier](uploading-an-archive.md)). 

 
+ The high-level API provides a method that you can use to upload archives of any size. Depending on the file you are uploading, the method either uploads an archive in a single operation or uses the multipart upload support in Amazon Glacier to upload the archive in parts.
+ The low-level API maps closely to the underlying REST implementation. Accordingly, it provides a method to upload smaller archives in one operation and a group of methods that support multipart upload for larger archives. This section explains uploading large archives in parts using the low-level API.

For more information about the high-level and low-level APIs, see [Using the AWS SDK for Java with Amazon Glacier](using-aws-sdk-for-java.md).

**Topics**
+ [Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for Java](#uploading-an-archive-in-parts-highlevel-using-java)
+ [Upload Large Archives in Parts Using the Low-Level API of the AWS SDK for Java](#uploading-an-archive-mpu-using-java-lowlevel)

## Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for Java


You use the same methods of the high-level API to upload small or large archives. Based on the archive size, the high-level API methods decide whether to upload the archive in a single operation or use the multipart upload API provided by Amazon Glacier. For more information, see [Uploading an Archive Using the High-Level API of the AWS SDK for Java](uploading-an-archive-single-op-using-java.md#uploading-an-archive-single-op-high-level-using-java).

## Upload Large Archives in Parts Using the Low-Level API of the AWS SDK for Java


For granular control of the upload, you can use the low-level API, where you can configure the request and process the response. The following are the steps to upload large archives in parts using the AWS SDK for Java.

 

1. Create an instance of the `AmazonGlacierClient` class (the client). 

   You need to specify an AWS Region where you want to save the archive. All operations you perform using this client apply to that AWS Region. 

1. Initiate multipart upload by calling the `initiateMultipartUpload` method.

   You need to provide the vault name to which you want to upload the archive, the part size you want to use to upload archive parts, and an optional description. You provide this information by creating an instance of the `InitiateMultipartUploadRequest` class. In response, Amazon Glacier returns an upload ID.

1. Upload parts by calling the `uploadMultipartPart` method. 

   For each part you upload, you need to provide the vault name, the byte range in the final assembled archive that will be uploaded in this part, the checksum of the part data, and the upload ID. 

1. Complete multipart upload by calling the `completeMultipartUpload` method.

   You need to provide the upload ID, the checksum of the entire archive, the archive size (combined size of all parts you uploaded), and the vault name. Amazon Glacier constructs the archive from the uploaded parts and returns an archive ID.

### Example: Uploading a Large Archive in Parts Using the AWS SDK for Java


The following Java code example uses the AWS SDK for Java to upload an archive to a vault (`examplevault`). For step-by-step instructions on how to run this example, see [Running Java Examples for Amazon Glacier Using Eclipse](using-aws-sdk-for-java.md#setting-up-and-testing-sdk-java). You need to update the code as shown with the name of the file you want to upload.

 

**Note**  
This example is valid for part sizes from 1 MB to 1 GB. However, Amazon Glacier supports part sizes up to 4 GB. 

**Example**  

```
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.Date;
import java.util.LinkedList;
import java.util.List;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.TreeHashGenerator;
import com.amazonaws.services.glacier.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.glacier.model.CompleteMultipartUploadResult;
import com.amazonaws.services.glacier.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.glacier.model.InitiateMultipartUploadResult;
import com.amazonaws.services.glacier.model.UploadMultipartPartRequest;
import com.amazonaws.services.glacier.model.UploadMultipartPartResult;
import com.amazonaws.util.BinaryUtils;

public class ArchiveMPU {

    public static String vaultName = "examplevault";
    // This example works for part sizes up to 1 GB.
    public static String partSize = "1048576"; // 1 MB.
    public static String archiveFilePath = "*** provide archive file path ***";
    public static AmazonGlacierClient client;
    
    public static void main(String[] args) throws IOException {

    	ProfileCredentialsProvider credentials = new ProfileCredentialsProvider();

        client = new AmazonGlacierClient(credentials);
        client.setEndpoint("https://glacier.us-west-2.amazonaws.com/");

        try {
            System.out.println("Uploading an archive.");
            String uploadId = initiateMultipartUpload();
            String checksum = uploadParts(uploadId);
            String archiveId = CompleteMultiPartUpload(uploadId, checksum);
            System.out.println("Completed an archive. ArchiveId: " + archiveId);
            
        } catch (Exception e) {
            System.err.println(e);
        }

    }
    
    private static String initiateMultipartUpload() {
        // Initiate
        InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest()
            .withVaultName(vaultName)
            .withArchiveDescription("my archive " + (new Date()))
            .withPartSize(partSize);            
        
        InitiateMultipartUploadResult result = client.initiateMultipartUpload(request);
        
        System.out.println("Upload ID: " + result.getUploadId());
        return result.getUploadId();
    }

    private static String uploadParts(String uploadId) throws AmazonServiceException, NoSuchAlgorithmException, AmazonClientException, IOException {

        int filePosition = 0;
        long currentPosition = 0;
        byte[] buffer = new byte[Integer.valueOf(partSize)];
        List<byte[]> binaryChecksums = new LinkedList<byte[]>();
        
        File file = new File(archiveFilePath);
        FileInputStream fileToUpload = new FileInputStream(file);
        String contentRange;
        int read = 0;
        while (currentPosition < file.length())
        {
            read = fileToUpload.read(buffer, filePosition, buffer.length);
            if (read == -1) { break; }
            byte[] bytesRead = Arrays.copyOf(buffer, read);

            contentRange = String.format("bytes %s-%s/*", currentPosition, currentPosition + read - 1);
            String checksum = TreeHashGenerator.calculateTreeHash(new ByteArrayInputStream(bytesRead));
            byte[] binaryChecksum = BinaryUtils.fromHex(checksum);
            binaryChecksums.add(binaryChecksum);
            System.out.println(contentRange);
                        
            //Upload part.
            UploadMultipartPartRequest partRequest = new UploadMultipartPartRequest()
            .withVaultName(vaultName)
            .withBody(new ByteArrayInputStream(bytesRead))
            .withChecksum(checksum)
            .withRange(contentRange)
            .withUploadId(uploadId);               
        
            UploadMultipartPartResult partResult = client.uploadMultipartPart(partRequest);
            System.out.println("Part uploaded, checksum: " + partResult.getChecksum());
            
            currentPosition = currentPosition + read;
        }
        fileToUpload.close();
        String checksum = TreeHashGenerator.calculateTreeHash(binaryChecksums);
        return checksum;
    }

    private static String CompleteMultiPartUpload(String uploadId, String checksum) throws NoSuchAlgorithmException, IOException {
        
        File file = new File(archiveFilePath);

        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest()
            .withVaultName(vaultName)
            .withUploadId(uploadId)
            .withChecksum(checksum)
            .withArchiveSize(String.valueOf(file.length()));
        
        CompleteMultipartUploadResult compResult = client.completeMultipartUpload(compRequest);
        return compResult.getLocation();
    }
}
```

# Uploading Large Archives in Parts Using the AWS SDK for .NET

Both the [high-level and low-level APIs](using-aws-sdk.md) provided by the AWS SDK for .NET offer a way to upload large archives in parts (see [Uploading an Archive in Amazon Glacier](uploading-an-archive.md)). 

 
+ The high-level API provides a method that you can use to upload archives of any size. Depending on the file you are uploading, the method either uploads the archive in a single operation or uses the multipart upload support in Amazon Glacier to upload the archive in parts.
+ The low-level API maps closely to the underlying REST implementation. Accordingly, it provides a method to upload smaller archives in one operation and a group of methods that support multipart upload for larger archives. This section explains uploading large archives in parts using the low-level API.

For more information about the high-level and low-level APIs, see [Using the AWS SDK for .NET with Amazon Glacier](using-aws-sdk-for-dot-net.md).

**Topics**
+ [Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for .NET](#uploading-an-archive-in-parts-highlevel-using-dotnet)
+ [Uploading Large Archives in Parts Using the Low-Level API of the AWS SDK for .NET](#uploading-an-archive-in-parts-lowlevel-using-dotnet)

## Uploading Large Archives in Parts Using the High-Level API of the AWS SDK for .NET


You use the same methods of the high-level API to upload small or large archives. Based on the archive size, the high-level API methods decide whether to upload the archive in a single operation or use the multipart upload API provided by Amazon Glacier. For more information, see [Uploading an Archive Using the High-Level API of the AWS SDK for .NET](uploading-an-archive-single-op-using-dotnet.md#uploading-an-archive-single-op-highlevel-using-dotnet).

## Uploading Large Archives in Parts Using the Low-Level API of the AWS SDK for .NET


For granular control of the upload, you can use the low-level API, where you can configure the request and process the response. The following are the steps to upload large archives in parts using the AWS SDK for .NET.

 

1. Create an instance of the `AmazonGlacierClient` class (the client). 

   You need to specify an AWS Region where you want to save the archive. All operations you perform using this client apply to that AWS Region. 

1. Initiate multipart upload by calling the `InitiateMultipartUpload` method.

   You need to provide the vault name to which you want to upload the archive, the part size you want to use to upload archive parts, and an optional description. You provide this information by creating an instance of the `InitiateMultipartUploadRequest` class. In response, Amazon Glacier returns an upload ID.

1. Upload parts by calling the `UploadMultipartPart` method. 

   For each part you upload, you need to provide the vault name, the byte range in the final assembled archive that will be uploaded in this part, the checksum of the part data, and the upload ID. 

1. Complete the multipart upload by calling the `CompleteMultipartUpload` method.

   You need to provide the upload ID, the checksum of the entire archive, the archive size (combined size of all parts you uploaded), and the vault name. Amazon Glacier constructs the archive from the uploaded parts and returns an archive ID.

### Example: Uploading a Large Archive in Parts Using the AWS SDK for .NET


The following C# code example uses the AWS SDK for .NET to upload an archive to a vault (`examplevault`). For step-by-step instructions on how to run this example, see [Running Code Examples](using-aws-sdk-for-dot-net.md#setting-up-and-testing-sdk-dotnet). You need to update the code as shown with the name of a file you want to upload.

**Example**  

```
using System;
using System.Collections.Generic;
using System.IO;
using Amazon.Glacier;
using Amazon.Glacier.Model;
using Amazon.Runtime;

namespace glacier.amazon.com.rproxy.govskope.ca.docsamples
{
  class ArchiveUploadMPU
  {
    static string vaultName       = "examplevault";
    static string archiveToUpload = "*** Provide file name (with full path) to upload ***";
    static long partSize          = 4194304; // 4 MB.

    public static void Main(string[] args)
    {
      AmazonGlacierClient client;
      List<string> partChecksumList = new List<string>();
      try
      {
        using (client = new AmazonGlacierClient(Amazon.RegionEndpoint.USWest2))
        {
          Console.WriteLine("Uploading an archive.");
          string uploadId  = InitiateMultipartUpload(client);
          partChecksumList = UploadParts(uploadId, client);
          string archiveId = CompleteMPU(uploadId, client, partChecksumList);
          Console.WriteLine("Archive ID: {0}", archiveId);
        }
        Console.WriteLine("Operations successful. To continue, press Enter");
        Console.ReadKey();
      }
      catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
      catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
      catch (Exception e) { Console.WriteLine(e.Message); }
      Console.WriteLine("To continue, press Enter");
      Console.ReadKey();
    }

    static string InitiateMultipartUpload(AmazonGlacierClient client)
    {
      InitiateMultipartUploadRequest initiateMPUrequest = new InitiateMultipartUploadRequest()
      {

        VaultName = vaultName,
        PartSize = partSize,
        ArchiveDescription = "Test doc uploaded using MPU."
      };

      InitiateMultipartUploadResponse initiateMPUresponse = client.InitiateMultipartUpload(initiateMPUrequest);

      return initiateMPUresponse.UploadId;
    }

    static List<string> UploadParts(string uploadID, AmazonGlacierClient client)
    {
      List<string> partChecksumList = new List<string>();
      long currentPosition = 0;
      var buffer = new byte[Convert.ToInt32(partSize)];

      long fileLength = new FileInfo(archiveToUpload).Length;
      using (FileStream fileToUpload = new FileStream(archiveToUpload, FileMode.Open, FileAccess.Read))
      {
        while (fileToUpload.Position < fileLength)
        {
          Stream uploadPartStream = GlacierUtils.CreatePartStream(fileToUpload, partSize);
          string checksum = TreeHashGenerator.CalculateTreeHash(uploadPartStream);
          partChecksumList.Add(checksum);
          // Upload part.
          UploadMultipartPartRequest uploadMPUrequest = new UploadMultipartPartRequest()
          {

            VaultName = vaultName,
            Body = uploadPartStream,
            Checksum = checksum,
            UploadId = uploadID
          };
          uploadMPUrequest.SetRange(currentPosition, currentPosition + uploadPartStream.Length - 1);
          client.UploadMultipartPart(uploadMPUrequest);

          currentPosition = currentPosition + uploadPartStream.Length;
        }
      }
      return partChecksumList;
    }

    static string CompleteMPU(string uploadID, AmazonGlacierClient client, List<string> partChecksumList)
    {
      long fileLength = new FileInfo(archiveToUpload).Length;
      CompleteMultipartUploadRequest completeMPUrequest = new CompleteMultipartUploadRequest()
      {
        UploadId = uploadID,
        ArchiveSize = fileLength.ToString(),
        Checksum = TreeHashGenerator.CalculateTreeHash(partChecksumList),
        VaultName = vaultName
      };

      CompleteMultipartUploadResponse completeMPUresponse = client.CompleteMultipartUpload(completeMPUrequest);
      return completeMPUresponse.ArchiveId;
    }
  }
}
```

# Uploading Large Archives in Parts Using the REST API


As described in [Uploading Large Archives in Parts (Multipart Upload)](uploading-archive-mpu.md), multipart upload refers to a set of operations that enable you to upload an archive in parts and perform related operations. For more information about these operations, see the following API reference topics:

 
+ [Initiate Multipart Upload (POST multipart-uploads)](api-multipart-initiate-upload.md)
+ [Upload Part (PUT uploadID)](api-upload-part.md)
+ [Complete Multipart Upload (POST uploadID)](api-multipart-complete-upload.md)
+ [Abort Multipart Upload (DELETE uploadID)](api-multipart-abort-upload.md)
+ [List Parts (GET uploadID)](api-multipart-list-parts.md)
+ [List Multipart Uploads (GET multipart-uploads)](api-multipart-list-uploads.md)
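
As a rough illustration of the wire format, initiating a multipart upload is a signed POST request with a shape such as the following. The account ID, vault name, Region, and header values here are placeholders; see [Initiate Multipart Upload (POST multipart-uploads)](api-multipart-initiate-upload.md) for the authoritative request syntax.

```
POST /123456789012/vaults/examplevault/multipart-uploads HTTP/1.1
Host: glacier.us-west-2.amazonaws.com
Date: <date>
Authorization: <SigV4 authorization header>
x-amz-glacier-version: 2012-06-01
x-amz-part-size: 1048576
x-amz-archive-description: my archive
```

A successful response should return `201 Created` with the upload ID in the `x-amz-multipart-upload-id` response header, which the subsequent Upload Part and Complete Multipart Upload requests reference in their URIs.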