

AWS Snowball Edge is no longer available to new customers. New customers should explore [AWS DataSync](https://aws.amazon.com/datasync/) for online transfers, [AWS Data Transfer Terminal](https://aws.amazon.com/data-transfer-terminal/) for secure physical transfers, or AWS Partner solutions. For edge computing, explore [AWS Outposts](https://aws.amazon.com/outposts/). 

# Understanding Snowball Edge jobs
<a name="jobs"></a>

A *job* in AWS Snowball Edge is a discrete unit of work, defined when you create it in the console or with the job management API. With the AWS Snowball Edge device, there are three job types, all of which are capable of local storage and compute functionality. This functionality uses the NFS interface or the Amazon S3 interface to read and write data, and can trigger Lambda functions based on Amazon S3 PUT object API actions running locally on the AWS Snowball Edge device.
+ [Jobs to import data into Amazon S3 using a Snowball Edge device](importtype.md) – The transfer of 210 TB or less of your local data copied onto a single device, and then moved into Amazon S3. For import jobs, Snowball devices and jobs have a one-to-one relationship. Each job has exactly one device associated with it. If you need to import more data, you can create new import jobs or clone existing ones. When you return a device of this job type, the data on it is imported into Amazon S3.
+ [Jobs to export data from Amazon S3 using a Snowball Edge device](exporttype.md) – The transfer of any amount of data (located in Amazon S3), copied onto any number of Snowball Edge devices, and then moved one AWS Snowball Edge device at a time into your on-premises data destination. When you create an export job, it's split into job parts. Each job part is no more than 210 TB in size, and each job part has exactly one AWS Snowball Edge device associated with it. When you return a device of this job type, it's erased.
+ [Information about using Snowball Edge devices to provide local compute and storage functionality](computetype.md) – These jobs involve one AWS Snowball Edge device, or multiple devices used in a cluster. These jobs don't start with data in their buckets like an export job, and they can't have data imported into Amazon S3 at the end like an import job. When you return a device of this job type, it's erased. With this job type, you also have the option of creating a cluster of devices. A cluster improves local storage durability, and you can scale local storage capacity up or down.

  In Regions where Lambda is not available, this job type is called *Local storage only*.
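
When you order a job programmatically, these job types map to the `JobType` parameter of the job management API's `CreateJob` action. The following is a minimal sketch of assembling such a request with boto3; the bucket, address ID, role, and key ARNs are placeholder values, and the helper function name is illustrative.

```python
# Sketch: build the parameter dict for an import job request.
# All ARNs and IDs below are placeholders -- substitute your own values.

def build_import_job_request(bucket_arn, address_id, role_arn, kms_key_arn):
    """Assemble the parameters for snowball.create_job (import job type)."""
    return {
        "JobType": "IMPORT",
        "Resources": {"S3Resources": [{"BucketArn": bucket_arn}]},
        "AddressId": address_id,
        "RoleARN": role_arn,
        "KmsKeyARN": kms_key_arn,
        "SnowballCapacityPreference": "T100",
        "ShippingOption": "SECOND_DAY",
        "SnowballType": "EDGE",
        "Description": "my-import-job",
    }

request = build_import_job_request(
    "arn:aws:s3:::amzn-s3-demo-bucket",
    "ADID00000000-0000-0000-0000-000000000000",
    "arn:aws:iam::123456789012:role/snowball-import-role",
    "arn:aws:kms:us-west-2:123456789012:key/00000000-0000-0000-0000-000000000000",
)
# To submit the job: boto3.client("snowball").create_job(**request)
```

The `SnowballCapacityPreference`, `ShippingOption`, and `SnowballType` values shown are examples; the values available to you depend on your AWS Region and the device you order.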

## Details about Snowball Edge jobs
<a name="jobdetails"></a>

Before creating a job, ensure the [prerequisites](snowball-prereqs.md) are met. Each job is defined by the details that you specify when it's created. The following table describes all the details of a job.



| Console identifier | API identifier | Detail description | 
| --- | --- | --- | 
| Job name | Description | A name for the job, containing alphanumeric characters, spaces, and any Unicode special characters. | 
| Job type | JobType | The type of job, either import, export, or local compute and storage. | 
| Job ID | JobId | A unique 39-character label that identifies your job. The job ID appears at the bottom of the shipping label that appears on the E Ink display, and in the name of a job's manifest file. | 
| Address | AddressId | The address that the device will be shipped to. In the case of the API, this is the ID for the address data type. | 
| Created date |  CreationDate  | The date that you created this job. | 
| Shipping speed | ShippingOption | Speed options are based on region. For more information, see [Shipping speeds for Snowball Edge](mailing-storage.md#shippingspeeds). | 
| IAM role ARN | RoleARN | This Amazon Resource Name (ARN) is the AWS Identity and Access Management (IAM) role that is created during job creation with write permissions for your Amazon S3 buckets. The creation process is automatic, and the IAM role that you allow AWS Snowball Edge to assume is only used to copy your data between your S3 buckets and the Snowball. For more information, see [Permissions Required to Use the AWS Snowball Edge Console](access-control-managing-permissions.md#additional-console-required-permissions). | 
| AWS KMS key | KmsKeyARN | In AWS Snowball Edge, AWS Key Management Service (AWS KMS) encrypts the keys on each Snowball. When you create your job, you also choose or create an ARN for an AWS KMS encryption key that you own. For more information, see [AWS Key Management Service in AWS Snowball Edge](data-protection.md#kms). | 
| Snowball capacity | SnowballCapacityPreference | The storage capacity of the AWS Snowball Edge device ordered in this job. The available size depends on your AWS Region.  | 
| Storage service | N/A | The AWS storage service associated with this job, in this case Amazon S3. | 
| Resources | Resources | The AWS storage service resources associated with your job. In this case, these are the Amazon S3 buckets that your data is transferred to or from. | 
| Snowball type | SnowballType | The type of Snowball Edge device ordered in this job. | 
| Cluster ID | ClusterId | A unique 39-character label that identifies your cluster. | 

# Statuses of Snowball Edge jobs
<a name="jobstatuses"></a>

Each AWS Snowball Edge device job has a *status*, which changes to denote the current state of the job. This job status information doesn't reflect the health, the current processing state, or the storage used for the associated devices.

**To see the status of a job**

1. Sign in to the [AWS Snow Family Management Console](https://console.aws.amazon.com/snowfamily/home).

1. On the **Job dashboard**, choose the job name.

1. The **Job status** pane near the top of the page reflects the status of the job.


**AWS Snowball Edge device job statuses**  

| Console Identifier | API Identifier | Status Description | 
| --- | --- | --- | 
| Job created | New | Your job has just been created. This is the only status during which you can cancel a job or, for export jobs, its job parts. | 
| Preparing appliance | PreparingAppliance | AWS is preparing a device for your job. | 
| Exporting | InProgress | AWS is exporting your data from Amazon S3 onto a device. | 
| Preparing shipment | PreparingShipment | AWS is preparing to ship a device to you. Shipping tracking information is provided with this status. | 
| In transit to you | InTransitToCustomer | The device has been shipped to the address you provided during job creation. | 
| Delivered to you | WithCustomer | The device has arrived at the address you provided during job creation. | 
| In transit to AWS | InTransitToAWS | You have shipped the device back to AWS. | 
| At sorting facility | WithAWSSortingFacility | The device for this job is at our internal sorting facility. Any additional processing for import jobs into Amazon S3 will begin soon, typically within 2 days. | 
| At AWS | WithAWS | Your shipment has arrived at AWS. If you're importing data, your import typically begins within a day of its arrival. | 
| Importing | InProgress | AWS is importing your data into Amazon Simple Storage Service (Amazon S3). | 
| Completed | Complete | Your job or a part of your job has completed successfully. | 
| Canceled | Cancelled | Your job has been canceled. | 
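
When you poll a job with the API's `DescribeJob` action, the returned `JobState` uses the API identifiers above. A small lookup can translate them into the console labels; this is a sketch mirroring the table (the function name is illustrative), noting that `InProgress` appears in the console as either **Exporting** or **Importing** depending on the job type.

```python
# Lookup table mirroring the job status table above.
API_TO_CONSOLE_STATUS = {
    "New": "Job created",
    "PreparingAppliance": "Preparing appliance",
    "PreparingShipment": "Preparing shipment",
    "InTransitToCustomer": "In transit to you",
    "WithCustomer": "Delivered to you",
    "InTransitToAWS": "In transit to AWS",
    "WithAWSSortingFacility": "At sorting facility",
    "WithAWS": "At AWS",
    "InProgress": "Exporting / Importing",  # depends on the job type
    "Complete": "Completed",
    "Cancelled": "Canceled",
}

def console_status(job_state):
    """Translate a JobState value from DescribeJob into its console label."""
    return API_TO_CONSOLE_STATUS.get(job_state, job_state)
```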

## Statuses of Snowball Edge cluster jobs
<a name="clusterstatuses"></a>

Each cluster has a *status*, which changes to denote the current general progress state of the cluster. Each individual node of the cluster has its own job status.

This cluster status information doesn't reflect the health, the current processing state, or the storage used for the cluster or its nodes.


| Console Identifier | API Identifier | Status Description | 
| --- | --- | --- | 
| Awaiting Quorum | AwaitingQuorum | The cluster hasn't been created yet, because there aren't enough nodes to begin processing the cluster request. For a cluster to be created, it must have at least three nodes. | 
| Pending | Pending | Your cluster has been created, and we're getting its nodes ready to ship out. You can track the status of each node with that node's job status. | 
| Delivered to you | InUse | At least one node of the cluster is at the address you provided during job creation. | 
| Completed | Complete | All the nodes of the cluster have been returned to AWS. | 
| Canceled | Cancelled | The request to make a cluster was canceled. Cluster requests can be canceled only before they enter the Pending state. | 

# Jobs to import data into Amazon S3 using a Snowball Edge device
<a name="importtype"></a>

With an import job, your data is copied to the AWS Snowball Edge device with the built-in Amazon S3 adapter or NFS mount point. Your data source for an import job should be on-premises. In other words, the storage devices that hold the data to be transferred should be physically located at the address that you provided when you created the job.

When you import files, each file becomes an object in Amazon S3 and each directory becomes a prefix. If you import data into an existing bucket, any existing objects with the same names as newly imported objects are overwritten. The import job type is also capable of local storage and compute functionality. This functionality uses the NFS interface or Amazon S3 adapter to read and write data, and triggers Lambda functions based on Amazon S3 PUT object API actions running locally on the AWS Snowball Edge device.
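
The file-to-object mapping can be sketched as follows; the paths and helper name are hypothetical, and the rule assumed is simply that an imported file's key is its path relative to the directory you copy from, with directories as prefix delimiters.

```python
import os

def object_key_for(local_path, source_root):
    """Derive the Amazon S3 object key an imported file receives:
    the path relative to the copy root, with directories as prefixes."""
    rel = os.path.relpath(local_path, source_root)
    return rel.replace(os.sep, "/")

key = object_key_for("/data/export/logs/2023/app.log", "/data/export")
# key == "logs/2023/app.log" -- the directories become the key prefix
```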

When all of your data has been imported into the specified Amazon S3 buckets in the AWS Cloud, AWS performs a complete erasure of the device. This erasure follows the NIST 800-88 standards.

After your import is complete, you can download a job report. This report alerts you to any objects that failed the import process. You can find additional information in the success and failure logs.

**Important**  
Don't delete your local copies of the transferred data until you can verify the results of the job completion report and review your import logs.

# Jobs to export data from Amazon S3 using a Snowball Edge device
<a name="exporttype"></a>

**Note**  
Object tags and metadata are not currently supported. All tags and metadata are removed when objects are exported from S3 buckets.

Your data source for an export job is one or more Amazon S3 buckets. After the data for a job part is moved from Amazon S3 to an AWS Snowball Edge device, you can download a job report. This report alerts you to any objects that failed the transfer to the device. You can find more information in your job's success and failure logs.

You can export any number of objects for each export job, using as many devices as it takes to complete the transfer. Each AWS Snowball Edge device for an export job's job parts is delivered one after another, with subsequent devices shipping to you after the previous job part enters the **In transit to AWS** status.

When you copy objects into your on-premises data destination from a device using the Amazon S3 adapter or the NFS mount point, those objects are saved as files. If you copy objects into a location that already holds files, any existing files with the same names are overwritten. The export job type is also capable of local storage and compute functionality. This functionality uses the NFS interface or Amazon S3 adapter to read and write data, and triggers Lambda functions based on Amazon S3 PUT object API actions running locally on the AWS Snowball Edge device.

When AWS receives a returned device, we completely erase it, following the NIST 800-88 standards.

**Important**  
Data that you want to export to a Snow device must be in Amazon S3. Any data in the Amazon S3 Glacier storage classes must be restored or moved to a standard S3 storage class before it can be exported. Do this before creating the Snow export job.  
Don't change, update, or delete the exported Amazon S3 objects until you can verify that all of your contents for the entire job have been copied to your on-premises data destination.

When you create an export job, you can export an entire Amazon S3 bucket or a specific range of object keys.

## Using Amazon S3 object keys when exporting data to a Snowball Edge device
<a name="ranges"></a>

When you create an export job in the [AWS Snow Family Management Console](https://console.aws.amazon.com/snowfamily/home) or with the job management API, you can export an entire Amazon S3 bucket or a specific range of object keys. Object key names uniquely identify objects in a bucket. If you export a range, you define it by providing an inclusive range beginning, an inclusive range ending, or both. 

Ranges are UTF-8 binary sorted. UTF-8 binary data is sorted in the following way:
+ The numbers 0–9 come before both uppercase and lowercase English characters.
+ Uppercase English characters come before all lowercase English characters.
+ Lowercase English characters come last when sorted against uppercase English characters and numbers.
+ Special characters sort according to their UTF-8 byte values; depending on the character, they can sort before, between, or after the other character sets.

For more information about the specifics of UTF-8, see [UTF-8 on Wikipedia](https://en.wikipedia.org/wiki/UTF-8).
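
Because UTF-8 preserves code point ordering, you can preview how a set of keys will sort. For example, in Python, sorting by the UTF-8 encoded bytes reproduces the ordering rules above:

```python
# Python compares strings by Unicode code point, which for object keys
# matches UTF-8 binary order. Sorting by encoded bytes makes this explicit.
keys = ["car", "Banana", "01", "aardvark", "Car", "Aardvark"]
utf8_sorted = sorted(keys, key=lambda k: k.encode("utf-8"))
print(utf8_sorted)
# -> ['01', 'Aardvark', 'Banana', 'Car', 'aardvark', 'car']
```

Note that numbers sort first, then uppercase letters, then lowercase letters, exactly as listed above.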

### Examples of using Amazon S3 object keys when exporting data to a Snowball Edge device
<a name="range-examples"></a>

Assume that you have a bucket containing the following objects and prefixes, sorted in UTF-8 binary order:
+ 01
+ Aardvark
+ Aardwolf
+ Aasvogel/apple
+ Aasvogel/arrow/object1
+ Aasvogel/arrow/object2
+ Aasvogel/banana
+ Aasvogel/banker/object1
+ Aasvogel/banker/object2
+ Aasvogel/cherry
+ Banana
+ Car


| Specified range beginning | Specified range ending | Objects in the range that will be exported | 
| --- | --- | --- | 
| (none) | (none) | All of the objects in your bucket | 
| (none) | Aasvogel |  01 Aardvark Aardwolf Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana Aasvogel/banker/object1 Aasvogel/banker/object2 Aasvogel/cherry  | 
| (none) | Aasvogel/banana |  01 Aardvark Aardwolf Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana | 
| Aasvogel | (none) |  Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana Aasvogel/banker/object1 Aasvogel/banker/object2 Aasvogel/cherry Banana Car | 
| Aardwolf | (none) | Aardwolf Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana Aasvogel/banker/object1 Aasvogel/banker/object2 Aasvogel/cherry Banana Car | 
| Aar | (none) | Aardvark Aardwolf Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana Aasvogel/banker/object1 Aasvogel/banker/object2 Aasvogel/cherry Banana Car | 
| car | (none) | No objects are exported, and you get an error message when you try to create the job. Note that *car* is sorted below *Car* according to UTF-8 binary values. | 
| Aar | Aarrr | Aardvark Aardwolf | 
|  Aasvogel/arrow  | Aasvogel/arrox |  Aasvogel/arrow/object1 Aasvogel/arrow/object2  | 
| Aasvogel/apple | Aasvogel/banana |  Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana  | 
| Aasvogel/apple | Aasvogel/banker |  Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana Aasvogel/banker/object1 Aasvogel/banker/object2  | 
| Aasvogel/apple | Aasvogel/cherry |  Aasvogel/apple Aasvogel/arrow/object1 Aasvogel/arrow/object2 Aasvogel/banana Aasvogel/banker/object1 Aasvogel/banker/object2 Aasvogel/cherry  | 
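
One consistent reading of the ranges above (an assumption inferred from the table, not an official specification): a key is exported when it is at or after the inclusive beginning marker, and either at or before the inclusive ending marker or prefixed by it. A sketch under that assumption:

```python
def in_export_range(key, begin=None, end=None):
    """Plausible reading of the inclusive range semantics in the table above:
    a key qualifies if it is >= the beginning marker (when given) and either
    <= the ending marker or prefixed by it (when given)."""
    if begin is not None and key < begin:
        return False
    if end is not None and not (key <= end or key.startswith(end)):
        return False
    return True

bucket = ["01", "Aardvark", "Aardwolf", "Aasvogel/apple",
          "Aasvogel/arrow/object1", "Aasvogel/arrow/object2",
          "Aasvogel/banana", "Aasvogel/banker/object1",
          "Aasvogel/banker/object2", "Aasvogel/cherry", "Banana", "Car"]

# Reproduces the "(none) .. Aasvogel/banana" row of the table above:
in_range_keys = [k for k in bucket if in_export_range(k, end="Aasvogel/banana")]
print(in_range_keys)
```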

Assume that you have one bucket containing the following three prefixes (folders), and you want to copy all objects from **folder2**.
+ s3://bucket/folder1/
+ s3://bucket/folder2/
+ s3://bucket/folder3/


| Specified range beginning | Specified range ending | Objects in the range that will be exported | 
| --- | --- | --- | 
| folder2/ | folder2/ | All of the objects under the folder2/ prefix. | 

## Best practices for jobs exporting data from Amazon S3 to a Snowball Edge device
<a name="export-jobs-best-practices"></a>
+ Ensure that your data is in Amazon S3, and batch small files together before ordering the job.
+ Specify key ranges in the export job definition if your bucket contains millions of objects.
+ Update object keys to remove trailing slashes from names. Objects with trailing slashes in their names (/ or \) are not transferred to the Snowball Edge device.
+ For S3 buckets, the object key length limit is 255 characters.
+ For S3 buckets that are version-enabled, only the current version of each object is exported.
+ Delete markers are not exported.
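
As a quick pre-flight check before ordering the export job, you can scan for keys that won't transfer; this is a sketch, and the helper name is illustrative.

```python
def keys_that_wont_transfer(keys):
    """Flag object keys ending in a slash (forward or back), which are
    not transferred to the Snowball Edge device, per the guidance above."""
    return [k for k in keys if k.endswith("/") or k.endswith("\\")]

flagged = keys_that_wont_transfer(["logs/", "logs/app.log", "data\\"])
print(flagged)
# -> ['logs/', 'data\\']
```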

# Information about using Snowball Edge devices to provide local compute and storage functionality
<a name="computetype"></a>

Local compute and storage jobs enable you to use Amazon S3 compatible storage on Snowball Edge locally, without an internet connection. You can't export data from Amazon S3 to the device or import data into Amazon S3 when the device is returned. 

**Topics**
+ [Information about jobs to store data locally on Snowball Edge devices](#aboutstorage)
+ [Information about jobs providing local storage on a cluster of Snowball Edge devices](#clusteroption)

## Information about jobs to store data locally on Snowball Edge devices
<a name="aboutstorage"></a>

You can read and write objects to an AWS Snowball Edge device using Amazon S3 compatible storage on Snowball Edge or the S3 adapter. When you order a device, if you choose to use the S3 adapter, you also choose which Amazon S3 buckets will be included on the device when you receive it. If you choose to use Amazon S3 compatible storage on Snowball Edge, no Amazon S3 buckets are included on the device when you receive it.

You can create Amazon S3 buckets on the Snowball Edge devices to store and retrieve objects on premises for applications that require local data access, local data processing, and data residency. Amazon S3 compatible storage on Snowball Edge provides a new storage class, `SNOW`, which uses the Amazon S3 APIs and is designed to store data durably and redundantly across multiple Snowball Edge devices. You can use the same APIs and features on Snowball Edge buckets that you use on Amazon S3, including bucket lifecycle policies, encryption, and tagging. When the device or devices are returned to AWS, all data created or stored in Amazon S3 compatible storage on Snowball Edge is erased.

For more information, see [Amazon S3 compatible storage on Snowball Edge](https://docs.aws.amazon.com/snowball/latest/developer-guide/s3compatible-on-snow.html) in this guide.

When you've finished using the device, return it to AWS, and the device will be erased. This erasure follows the National Institute of Standards and Technology (NIST) 800-88 standards.

## Information about jobs providing local storage on a cluster of Snowball Edge devices
<a name="clusteroption"></a>

A cluster is a logical grouping of Snowball Edge devices, in groups of 3 to 16 devices. A cluster is created as a single job, which offers increased durability and storage size when compared to other AWS Snowball Edge job offerings. For more information about cluster jobs, see [Clustering overview](https://docs.aws.amazon.com/snowball/latest/developer-guide/ClusterOverview.html) in this guide.