

# Preparing and importing bulk data using Amazon SageMaker AI Data Wrangler
<a name="preparing-importing-with-data-wrangler"></a>

**Important**  
As you use Data Wrangler, you incur SageMaker AI costs. For a complete list of charges and prices, see the Data Wrangler tab of [Amazon SageMaker AI pricing](https://aws.amazon.com/sagemaker/pricing/). To avoid incurring additional fees, when you are finished, shut down your Data Wrangler instance. For more information, see [Shut Down Data Wrangler](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-shut-down.html). 

After you create a dataset group, you can use Amazon SageMaker AI Data Wrangler (Data Wrangler) to import data from 40+ sources into an Amazon Personalize dataset. Data Wrangler is a feature of Amazon SageMaker AI Studio Classic that provides an end-to-end solution to import, prepare, transform, and analyze data. You can't use Data Wrangler to prepare and import data into an Actions dataset or Action interactions dataset.

 When you use Data Wrangler to prepare and import data, you use a data flow. A *data flow* defines a series of machine learning data prep steps, starting with importing data. Each time you add a step to your flow, Data Wrangler takes an action on your data, such as transforming it or generating a visualization. 

The following are some of the steps that you can add to your flow to prepare data for Amazon Personalize:
+ **Insights:** You can add Amazon Personalize specific insight steps to your flow. These insights can help you learn about your data and what actions you can take to improve it.
+ **Visualizations:** You can add visualization steps to generate graphs such as histograms and scatter plots. Graphs can help you discover issues in your data, such as outliers or missing values.
+ **Transformations:** You can use Amazon Personalize specific and general transformation steps to make sure your data meets Amazon Personalize requirements. The Amazon Personalize transformation helps you map your data columns to required columns depending on the Amazon Personalize dataset type.

If you need to leave Data Wrangler before importing data into Amazon Personalize, you can return to where you left off by choosing the same dataset type when you [launch Data Wrangler from the Amazon Personalize console](dw-launch-dw-from-personalize.md). Or you can access Data Wrangler directly through SageMaker AI Studio Classic.

We recommend you import data from Data Wrangler into Amazon Personalize as follows. The transformation, visualization, and analysis steps are optional, repeatable, and can be completed in any order.

1. **[Set up permissions](dw-data-prep-minimum-permissions.md)** - Set up permissions for Amazon Personalize and SageMaker AI service roles. And set up permissions for your users.

1. **[Launch Data Wrangler in SageMaker AI Studio Classic from the Amazon Personalize console](dw-launch-dw-from-personalize.md)** - Use the Amazon Personalize console to configure a SageMaker AI domain and launch Data Wrangler in SageMaker AI Studio Classic.

1. **[Import your data into Data Wrangler](dw-import-data.md)** - Import data from 40+ sources into Data Wrangler. Sources include AWS services, such as Amazon Redshift, Amazon EMR, or Amazon Athena, and third parties, such as Snowflake or Databricks.

1. **[Transform your data](dw-transform-data.md)** - Use Data Wrangler to transform your data to meet Amazon Personalize requirements.

1. **[Visualize and analyze your data](dw-analyze-data.md)** - Use Data Wrangler to visualize your data and analyze it through Amazon Personalize specific insights.

1. **[Process and import data into Amazon Personalize](dw-export-data.md)** - Use a SageMaker AI Studio Classic Jupyter notebook to import your processed data into Amazon Personalize.

## Additional information
<a name="dw-additional-info"></a>

The following resources provide additional information about using Amazon SageMaker AI Data Wrangler and Amazon Personalize.
+ For a tutorial that walks you through processing and transforming a sample dataset, see [Demo: Data Wrangler Titanic Dataset Walkthrough](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-getting-started.html#data-wrangler-getting-started-demo) in the *Amazon SageMaker AI Developer Guide*. This tutorial introduces the fields and functions of Data Wrangler.
+ For information on onboarding to Amazon SageMaker AI domains, see [Quick onboard to Amazon SageMaker AI Domain](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html) in the *Amazon SageMaker AI Developer Guide*.
+ For information on Amazon Personalize data requirements, see [Preparing training data for Amazon Personalize](preparing-training-data.md).

# Setting up permissions
<a name="dw-data-prep-minimum-permissions"></a>

To prepare data with Data Wrangler, you must set up the following permissions: 
+ **Create a service role for Amazon Personalize:** If you haven't already, complete the instructions in [Setting up Amazon Personalize](setup.md) to create an IAM service role for Amazon Personalize. This role must have `GetObject` and `ListBucket` permissions for the Amazon S3 buckets that store your processed data. And it must have permission to use any AWS KMS keys.

   For information about granting Amazon Personalize access to your Amazon S3 buckets, see [Giving Amazon Personalize access to Amazon S3 resources](granting-personalize-s3-access.md). For information about granting Amazon Personalize access to your AWS KMS keys, see [Giving Amazon Personalize permission to use your AWS KMS key](granting-personalize-key-access.md). 
+  **Create an administrative user with SageMaker AI permissions:** Your administrator must have full access to SageMaker AI and must be able to create a SageMaker AI domain. For more information, see [Create an Administrative User and Group](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html#gs-account-user) in the *Amazon SageMaker AI Developer Guide*. 
+ **Create a SageMaker AI execution role:** Create a SageMaker AI execution role with access to SageMaker AI resources and Amazon Personalize data import operations. The SageMaker AI execution role must have the [AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess) policy attached. If you require more granular Data Wrangler permissions, see [Data Wrangler Security and Permissions](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-security.html#data-wrangler-security-iam-policy) in the *Amazon SageMaker AI Developer Guide*. For more information on SageMaker AI roles, see [SageMaker AI Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). 

  To grant access to Amazon Personalize data import operations, attach the following IAM policy to the SageMaker AI execution role. This policy grants the permissions required to import data into Amazon Personalize and attach a policy to your Amazon S3 bucket. And it grants `PassRole` permissions when the service is Amazon Personalize. Replace `amzn-s3-demo-bucket` with the name of the Amazon S3 bucket you want to use as the destination for your formatted data after you prepare it with Data Wrangler. 

------
#### [ JSON ]


  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "personalize:Create*",
                  "personalize:List*",
                  "personalize:Describe*"
              ],
              "Resource": "*"
          },
          {
              "Effect": "Allow",
              "Action": [
                  "s3:PutBucketPolicy"
              ],
              "Resource": [
                  "arn:aws:s3:::amzn-s3-demo-bucket",
                  "arn:aws:s3:::amzn-s3-demo-bucket/*"
              ]
          },
          {
              "Effect": "Allow",
              "Action": [
                  "iam:PassRole"
              ],
              "Resource": "*",
              "Condition": {
                  "StringEquals": {
                      "iam:PassedToService": "personalize.amazonaws.com"
                  }
              }
          }
      ]
  }
  ```

------

  For information on creating an IAM policy, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*. For information on attaching an IAM policy to a role, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.
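If you prefer to script this setup instead of using the console, the policy above can be built and attached programmatically. The following sketch uses boto3; the policy name `PersonalizeDataImportAccess` and role name `MySageMakerExecutionRole` are hypothetical placeholders, not values defined in this guide, and the boto3 calls are commented out because they require AWS credentials.

  ```python
  import json

  # Destination bucket for your processed data (example name from this guide)
  bucket = "amzn-s3-demo-bucket"

  # The same policy shown above, expressed as a Python dict
  policy = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "personalize:Create*",
                  "personalize:List*",
                  "personalize:Describe*",
              ],
              "Resource": "*",
          },
          {
              "Effect": "Allow",
              "Action": ["s3:PutBucketPolicy"],
              "Resource": [
                  f"arn:aws:s3:::{bucket}",
                  f"arn:aws:s3:::{bucket}/*",
              ],
          },
          {
              "Effect": "Allow",
              "Action": ["iam:PassRole"],
              "Resource": "*",
              "Condition": {
                  "StringEquals": {
                      "iam:PassedToService": "personalize.amazonaws.com"
                  }
              },
          },
      ],
  }
  policy_document = json.dumps(policy)

  # With credentials configured, you could create the policy and
  # attach it to your SageMaker AI execution role (names are hypothetical):
  # import boto3
  # iam = boto3.client("iam")
  # resp = iam.create_policy(
  #     PolicyName="PersonalizeDataImportAccess",
  #     PolicyDocument=policy_document,
  # )
  # iam.attach_role_policy(
  #     RoleName="MySageMakerExecutionRole",
  #     PolicyArn=resp["Policy"]["Arn"],
  # )
  ```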

# Launching Data Wrangler from Amazon Personalize
<a name="dw-launch-dw-from-personalize"></a>

To launch Data Wrangler from Amazon Personalize, you use the Amazon Personalize console to configure a SageMaker AI domain and launch Data Wrangler. 

**To launch Data Wrangler from Amazon Personalize**

1. Open the Amazon Personalize console at [https://console.aws.amazon.com/personalize/home](https://console.aws.amazon.com/personalize/home) and sign in to your account.

1. On the **Dataset groups** page, choose your dataset group.

1. In **Set up datasets**, choose **Create dataset**, and choose the type of dataset to create. You can't use Data Wrangler to prepare an Actions dataset or Action interactions dataset.

1. Choose **Import data using Data Wrangler** and choose **Next**.

1. For **SageMaker domain**, choose to use an existing domain or create a new one. You need a SageMaker AI Domain to access Data Wrangler in SageMaker AI Studio Classic. For information about domains and user profiles, see [SageMaker AI Domain](https://docs.aws.amazon.com/sagemaker/latest/dg/sm-domain.html) in the *Amazon SageMaker AI Developer Guide*.

1. To use an existing domain, choose a **SageMaker AI domain** and **User profile** to configure the domain.

1. To create a new domain:
   + Give the new domain a name.
   + Choose a **User profile name**.
   +  For **Execution role**, choose the role you created in [Setting up permissions](dw-data-prep-minimum-permissions.md). Or, if you have CreateRole permissions, create a new role using the role creation wizard. The role you use must have the `AmazonSageMakerFullAccess` policy attached. 

1. Choose **Next**. If you are creating a new domain, SageMaker AI starts creating your domain. This can take up to ten minutes.

1. Review the details for your SageMaker AI domain.

1. Choose **Import data with Data Wrangler**. SageMaker AI Studio Classic starts creating your environment, and when complete, the **Data flow** page of Data Wrangler in SageMaker AI Studio Classic opens in a new tab. It can take up to five minutes for SageMaker AI Studio Classic to finish creating your environment. When it finishes, you are ready to start importing data into Data Wrangler. For more information, see [Importing data into Data Wrangler](dw-import-data.md).

# Importing data into Data Wrangler
<a name="dw-import-data"></a>

 After you configure a SageMaker AI domain and launch Data Wrangler in a new tab, you are ready to import data from your source into Data Wrangler. When you use Data Wrangler to prepare data for Amazon Personalize, you import one dataset at a time. We recommend starting with an Item interactions dataset. You can't use Data Wrangler to prepare an Actions dataset or Action interactions dataset.

 You start on the **Data flow** page. The page should look similar to the following. 

![\[Depicts the Data flow page of Data Wrangler with Import data and Use sample dataset options.\]](http://docs.aws.amazon.com/personalize/latest/dg/images/dw-data-sources.png)


To start importing data, you choose **Import data** and specify your data source. Data Wrangler supports 40+ sources. These include AWS services, such as Amazon Redshift, Amazon EMR, or Amazon Athena, and third parties, such as Snowflake or Databricks. Different data sources have different procedures for connecting and importing data. 

For a complete list of available sources and step-by-step instructions on importing data, see [Import](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-import.html) in the *Amazon SageMaker AI Developer Guide*. 

After you import data into Data Wrangler, you are ready to transform it. For information about transforming data, see [Transforming data](dw-transform-data.md).

# Transforming data
<a name="dw-transform-data"></a>

 To transform data in Data Wrangler, you add a **Transform** step to your data flow. Data Wrangler includes over 300 transforms that you can use to prepare your data, including a **Map columns for Amazon Personalize** transform. And you can use the general Data Wrangler transforms to fix issues such as outliers, type issues, and missing values. 

After you finish transforming your data, you can analyze it with Data Wrangler. Or, if you are finished preparing your data in Data Wrangler, you can process it and import it into Amazon Personalize. For information about analyzing data, see [Generating visualizations and data insights](dw-analyze-data.md). For information about processing and importing data, see [Processing data and importing it into Amazon Personalize](dw-export-data.md).

**Topics**
+ [Mapping columns for Amazon Personalize](#dw-personalize-transform)
+ [General Data Wrangler transforms](#dw-general-transform)

## Mapping columns for Amazon Personalize
<a name="dw-personalize-transform"></a>

 To transform your data so it meets Amazon Personalize requirements, you add the **Map columns for Amazon Personalize** transform and map your columns to the required and optional fields for Amazon Personalize.

**To use the Map columns for Amazon Personalize transform**

1. Choose the **+** next to your latest transform and choose **Add transform**. If you haven't added a transform, choose the **+** next to the **Data types** transform. Data Wrangler adds this transform automatically to your flow. 

1.  Choose **Add step**. 

1.  Choose **Transforms for Amazon Personalize**. The **Map columns for Amazon Personalize** transform is selected by default. 

1. Use the transform fields to map your data to required Amazon Personalize attributes.

   1. Choose the dataset type that matches your data (Interactions, Items, or Users). 

   1. Choose your domain (ECOMMERCE, VIDEO_ON_DEMAND, or custom). The domain you choose must match the domain you specified when you created your dataset group.

   1. Choose the columns that match the required and optional fields for Amazon Personalize. For example, for the `ITEM_ID` field, choose the column in your data that stores the unique identifier for each of your items. 

      Each column field is filtered by data type. Only the columns in your data that meet Amazon Personalize data type requirements are available. If your data is not of the required type, you can use the [Parse Value as Type](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html#data-wrangler-transform-cast-type) Data Wrangler transform to convert it.

## General Data Wrangler transforms
<a name="dw-general-transform"></a>

 The following general Data Wrangler transforms can help you prepare data for Amazon Personalize: 
+ Data type conversion: If your field is not listed as a possible option in the **Map columns for Amazon Personalize** transform, you might need to convert its data type. The Data Wrangler transform [Parse Value as Type](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html#data-wrangler-transform-cast-type) can help you convert your data. Or you can use the **Data types** transform that Data Wrangler adds by default when you create a flow. To use this transform, you choose the data type from the **Type** drop-down lists, choose **Preview** and then choose **Update**.

   For information on required data types for fields, see the section for your domain and dataset type in [Creating schema JSON files for Amazon Personalize schemas](how-it-works-dataset-schema.md). 
+ Handling missing values and outliers: If you generate missing value or outlier insights, you can use the Data Wrangler transforms [Handle Outliers](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html#data-wrangler-transform-handle-outlier) and [Handle Missing Values](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html#data-wrangler-transform-handle-missing) to resolve these issues. 
+  Custom transformations: With Data Wrangler, you can create your own transformations with Python (User-Defined Function), PySpark, pandas, or PySpark (SQL). You might use a custom transform to perform tasks such as dropping duplicate columns or grouping by columns. For more information, see [Custom Transforms](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html#data-wrangler-transform-custom) in the *Amazon SageMaker AI Developer Guide*. 
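As an illustration of the kind of custom transform the last bullet describes, the following pandas sketch drops duplicate rows and converts an ISO-8601 timestamp column to the Unix epoch seconds that Amazon Personalize timestamps use. The column names are hypothetical examples; inside a Data Wrangler Python (pandas) custom transform, your code operates on the `df` that the step receives rather than a frame you construct yourself.

```python
import pandas as pd

def custom_transform(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicate rows and convert an ISO-8601 TIMESTAMP column
    to the Unix epoch seconds that Amazon Personalize expects."""
    df = df.drop_duplicates()
    # Parse as UTC datetimes, then convert nanoseconds since epoch to seconds
    df["TIMESTAMP"] = (
        pd.to_datetime(df["TIMESTAMP"], utc=True).astype("int64") // 10**9
    )
    return df

# Example input resembling raw interactions data (hypothetical values)
raw = pd.DataFrame({
    "USER_ID": ["u1", "u1", "u2"],
    "ITEM_ID": ["i1", "i1", "i9"],
    "TIMESTAMP": [
        "2024-01-01T00:00:00Z",
        "2024-01-01T00:00:00Z",  # duplicate row, removed by the transform
        "2024-06-15T12:30:00Z",
    ],
})
clean = custom_transform(raw)
```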

# Generating visualizations and data insights
<a name="dw-analyze-data"></a>

After you import your data into Data Wrangler, you can use it to generate visualizations and data insights. 
+  **[Visualizations](#dw-visualizing-data)**: Data Wrangler can generate different types of graphs, such as histograms and scatter plots. For example, you can generate a histogram to identify outliers in your data. 
+ **[Data insights](#dw-generating-insights)**: You can use a *Data Quality and Insights Report for Amazon Personalize* to learn about your data through data insights and column and row statistics. This report can let you know if you have any type issues in your data. And you can learn what actions you can take to improve your data. These actions can help you meet Amazon Personalize resource requirements, such as model training requirements, or they can lead to improved recommendations.

 After you learn about your data through visualizations and insights, you can use this information to help you apply additional transforms to improve your data. Or, if you are finished preparing your data, you can process it and import it into Amazon Personalize. For information about transforming your data, see [Transforming data](dw-transform-data.md). For information about processing and importing data, see [Processing data and importing it into Amazon Personalize](dw-export-data.md). 

## Generating visualizations
<a name="dw-visualizing-data"></a>

You can use Data Wrangler to create different types of graphs, such as histograms and scatter plots. For example, you can generate a histogram to identify outliers in your data. To generate a data visualization, you add an **Analysis** step to your flow and, from **Analysis type**, choose the visualization you want to create. 

 For more information about creating visualizations in Data Wrangler, see [Analyze and Visualize](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-analyses.html) in the *Amazon SageMaker AI Developer Guide*. 

## Generating data insights
<a name="dw-generating-insights"></a>

 You can use Data Wrangler to generate a **Data Quality and Insights Report for Amazon Personalize** specific to your dataset type. Before generating the report, we recommend that you transform your data to meet Amazon Personalize requirements. This leads to more relevant insights. For more information, see [Transforming data](dw-transform-data.md). 

**Topics**
+ [Report content](#dw-report-content)
+ [Generating the report](#dw-generating-insight-report)

### Report content
<a name="dw-report-content"></a>

The **Data Quality and Insights Report for Amazon Personalize** includes the following sections: 
+ **Summary:** The report summary includes dataset statistics and high priority warnings:
  + **Dataset statistics:** These include Amazon Personalize specific statistics, such as the number of unique users in your interactions data, and general statistics, such as the number of missing values or outliers.
  +  **High priority warnings:** These are Amazon Personalize specific insights that have the most impact on training or recommendations. Each warning includes a recommended action that you can take to resolve the issue. 
+  **Duplicate rows and Incomplete rows:** These sections include information on which rows have missing values and which rows are duplicated in your data. 
+  **Feature summary:** This section includes the data type for each column, invalid or missing data information, and warning counts. 
+  **Feature details:** This section includes subsections with detailed information for each of your columns of data. Each subsection includes statistics for the column, such as categorical value count, and missing value information. And each subsection includes Amazon Personalize specific insights and recommended actions for columns of data. For example, an insight might indicate that a column has more than 30 possible categories. 

#### Data type issues
<a name="dw-report-type-issues"></a>

 The report identifies columns that are not of the correct data type and specifies the required type. To get insights related to these features, you must convert the data type of the column and generate the report again. To convert the type, you can use the Data Wrangler transform [Parse Value as Type](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-transform.html#data-wrangler-transform-cast-type). 

#### Amazon Personalize insights
<a name="dw-report-insights"></a>

The Amazon Personalize insights include a finding and a suggested action. The action is optional. For example, the report might include an insight and action related to the number of categories for a column of categorical data. If you don't believe the column is categorical, you can disregard this insight and take no action.

 Except for minor wording differences, the Amazon Personalize specific insights are the same as the *single dataset* insights you might generate when you analyze your data with Amazon Personalize. For example, the insights report in Data Wrangler includes insights such as "The Item interactions dataset has only X unique users with two or more interactions." But it doesn't include insights like "X% of items in the *Items dataset* have no interactions in the *Item interactions dataset*."

 For a list of possible Amazon Personalize specific insights, see the insights that don't reference multiple datasets in [Data insights](analyzing-data.md#data-insights).

#### Report examples
<a name="dw-insight-report-examples"></a>

The look and feel of the Amazon Personalize report is the same as the general insights report in Data Wrangler. For examples of the general insights report, see [Get Insights On Data and Data Quality](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-data-insights.html) in the *Amazon SageMaker AI Developer Guide*. The following example shows how the summary section of a report for an Item interactions dataset might appear. It includes dataset statistics and some possible high priority Item interactions dataset warnings.

![\[Depicts the summary section of a report for an Item interactions dataset.\]](http://docs.aws.amazon.com/personalize/latest/dg/images/dw-reports-summary.png)


 The following example shows how the feature details section for an EVENT_TYPE column of an Item interactions dataset might appear in a report. 

![\[Depicts the feature details section for an EVENT_TYPE column of an Item interactions dataset.\]](http://docs.aws.amazon.com/personalize/latest/dg/images/dw-event-type-report.png)


### Generating the report
<a name="dw-generating-insight-report"></a>

To generate the **Data Quality and Insights Report for Amazon Personalize**, you choose **Get data insights** for your transform and create an analysis.

**To generate Data Quality and Insights Report for Amazon Personalize**

1. Choose the **+** next to the transform you are analyzing. If you haven't added a transform, choose the **+** next to the **Data types** transform. Data Wrangler adds this transform automatically to your flow. 

1. Choose **Get data insights**. The **Create analysis** panel displays.

1. For **Analysis type**, choose **Data Quality and Insights Report for Amazon Personalize**. 

1.  For **Dataset type**, choose the type of Amazon Personalize dataset you are analyzing. 

1. Optionally choose **Run on full data**. By default, Data Wrangler generates insights on only a sample of your data. 

1. Choose **Create**. When analysis completes, the report appears. 

# Processing data and importing it into Amazon Personalize
<a name="dw-export-data"></a>

 When you are finished analyzing and transforming your data, you are ready to process it and import it into Amazon Personalize. 
+  **[Processing data](#dw-process-data)** – Processing the data applies your transform to your entire dataset and outputs it to a destination you specify. In this case, you specify an Amazon S3 bucket. 
+ **[Importing data into Amazon Personalize](#dw-import-into-personalize)** – To import processed data into Amazon Personalize, you run a Jupyter Notebook provided in SageMaker AI Studio Classic. This notebook creates your Amazon Personalize datasets and imports your data into them. 

## Processing data
<a name="dw-process-data"></a>

 Before you import data into Amazon Personalize, you must apply your transform to your entire dataset and output it to an Amazon S3 bucket. To do this, you create a destination node with the destination set to an Amazon S3 bucket, and then launch a processing job for the transformation.

For step-by-step instructions on specifying a destination and launching a processing job, see [Launch processing jobs with a few clicks using Amazon SageMaker AI Data Wrangler](https://aws.amazon.com/blogs/machine-learning/launch-processing-jobs-with-a-few-clicks-using-amazon-sagemaker-data-wrangler/). When you add a destination, choose **Amazon S3**. You will use this location when you import the processed data into Amazon Personalize.

When you finish processing your data, you are ready to import it from the Amazon S3 bucket into Amazon Personalize.

## Importing data into Amazon Personalize
<a name="dw-import-into-personalize"></a>

After you process your data, you are ready to import it into Amazon Personalize. To import processed data into Amazon Personalize, you run a Jupyter Notebook provided in SageMaker AI Studio Classic. This notebook creates your Amazon Personalize datasets and imports your data into them.

**To import processed data into Amazon Personalize**

1. For the transformation you want to export, choose **Export to** and choose **Amazon Personalize (via Jupyter Notebook)**.

1. Modify the notebook to specify the Amazon S3 bucket you used as the data destination for the processing job. Optionally specify the domain for your dataset group. By default, the notebook creates a custom dataset group.

1. Review the notebook cells that create the schema. Verify that the schema fields have the expected types and attributes before running the cell. 
   +  Verify that fields that support null data have `null` listed in the list of types. The following example shows how to add `null` for a field. 

     ```
     {
       "name": "GENDER",
       "type": [
         "null",
         "string"
       ],
       "categorical": true
     }
     ```
   +  Verify that categorical fields have the categorical attribute set to true. The following example shows how to mark a field categorical. 

     ```
     {
       "name": "SUBSCRIPTION_MODEL",
       "type": "string",
       "categorical": true
     }
     ```
   + Verify that textual fields have the textual attribute set to true. The following example shows how to mark a field as textual.

     ```
     {
       "name": "DESCRIPTION",
       "type": [
         "null",
         "string"
       ],
       "textual": true
     }
     ```

1. Run the notebook to create a schema, create a dataset, and import your data into the Amazon Personalize dataset. You run the notebook just as you would a notebook outside of SageMaker AI Studio Classic. For information on running Jupyter notebooks, see [Running Code](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Running%20Code.html). For information on notebooks in SageMaker AI Studio Classic, see [Use Amazon SageMaker AI Notebooks](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks.html) in the *Amazon SageMaker AI Developer Guide*.

    After you complete the notebook, if you imported interactions data, you are ready to create recommenders or custom resources. Or you can repeat the process with an items dataset or users dataset.
   + For information about creating domain recommenders, see [Domain recommenders in Amazon Personalize](creating-recommenders.md). 
   + For information about creating and deploying custom resources, see [Custom resources for training and deploying Amazon Personalize models](create-custom-resources.md).
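For reference, the core of what the notebook automates can be sketched with boto3. The schema below is a minimal custom Interactions schema with only the required fields; the resource names, the dataset group ARN, the role ARN, and the S3 URI are hypothetical placeholders you would replace with your own values, and the real notebook adds the domain handling, waiting, and error handling this sketch omits.

```python
import json

# Minimal Avro schema for a custom Interactions dataset
schema_json = json.dumps({
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},
        {"name": "ITEM_ID", "type": "string"},
        {"name": "TIMESTAMP", "type": "long"},
    ],
    "version": "1.0",
})

def import_interactions(dataset_group_arn: str, role_arn: str, s3_uri: str):
    """Create a schema and dataset, then start an import job from
    the processed data in Amazon S3. All arguments are your own
    ARNs and the S3 location you chose for the processing job."""
    import boto3  # requires AWS credentials to actually run

    personalize = boto3.client("personalize")
    schema = personalize.create_schema(
        name="interactions-schema",  # hypothetical name
        schema=schema_json,
    )
    dataset = personalize.create_dataset(
        name="interactions",  # hypothetical name
        schemaArn=schema["schemaArn"],
        datasetGroupArn=dataset_group_arn,
        datasetType="Interactions",
    )
    # Starts the import; poll DescribeDatasetImportJob until it completes
    return personalize.create_dataset_import_job(
        jobName="dw-import",  # hypothetical name
        datasetArn=dataset["datasetArn"],
        dataSource={"dataLocation": s3_uri},
        roleArn=role_arn,
    )
```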