

# Managing configurations


You can edit your JupyterLab configuration on the JupyterLab page by choosing **Configure** in the top right corner. A popup appears where you can change the instance type. You can also increase the EBS volume up to 16 GB if allowed by your admin.

1. Navigate to Amazon SageMaker Unified Studio using the URL from your admin and log in using your SSO or AWS credentials. 

1. Expand the **Build** menu in the top navigation, then choose **JupyterLab**.

1. Choose the Configure button in the top right corner of the page. A popup appears where you can change the instance type and increase the EBS volume.

1. Specify the instance type and EBS volume that you want for testing.

   **Note:** After you increase the EBS volume, you cannot decrease it.

# Configuring Spark compute


Amazon SageMaker Unified Studio provides a set of Jupyter magic commands. Magic commands, or magics, enhance the functionality of the IPython environment. For more information about the magics that Amazon SageMaker Unified Studio provides, run `%help` in a notebook.

Compute-specific configurations can be set by using the `%%configure` Jupyter magic. The `%%configure` magic takes a JSON-formatted dictionary. To use the `%%configure` magic, specify the compute name with the `-n` argument. Including `-f` restarts the session to forcefully apply the new configuration. Otherwise, the configuration applies when the next session starts.

For example: `%%configure -n compute_name -f`.
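As a sketch, the following cell sets a Spark property and forcefully restarts the session to apply it. Here `compute_name` is a placeholder for the name of your compute, and `spark.sql.session.timeZone` is one example of a standard Spark property:

```
%%configure -n compute_name -f
{
    "conf": {
        "spark.sql.session.timeZone": "UTC"
    }
}
```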

# Library management


You can use the library management widget in JupyterLab to manage the library installations and configurations in your notebook.

To navigate to the library management of a notebook in Amazon SageMaker Unified Studio, complete the following steps:

1. Navigate to Amazon SageMaker Unified Studio using the URL from your admin and log in using your SSO or AWS credentials. 

1. Navigate to a project. You can do this by choosing **Browse all projects** from the center menu and then selecting a project, or by creating a new project.

1. From the **Build** menu, choose **JupyterLab**.

1. Navigate to a notebook or create a new one by selecting **File** > **New** > **Notebook**.

1. Choose the library management icon from the notebook navigation bar.  
![The Amazon SageMaker Unified Studio JupyterLab library icon.](http://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/images/library-icon.png)

The following library configurations are available:

## Jar

+ **Maven artifacts**
+ **S3 paths**
+ **Disk location paths**
+ **Other paths**

## Python

+ **Conda packages**
+ **PyPI packages**
+ **S3 paths**
+ **Disk location paths**
+ **Other paths**

## Adding JupyterLab library configurations




1. Navigate to the JupyterLab library management page.

1. Select the configuration method you would like to add from the left navigation of the library management page.

1. Choose **Add**.

1. Input the URL, package name, coordinates, or other information as the fields indicate.

1. In the left navigation of the library management page, check the box **Apply the change to JupyterLab**.

1. Choose **Save all changes**.

# Compute-specific configuration


As described in Configuring Spark compute, you can set compute-specific configurations with the `%%configure` Jupyter magic, which takes a JSON-formatted dictionary. Specify the compute name with the `-n` argument, and include `-f` to restart the session and forcefully apply the new configuration; otherwise, the configuration applies when the next session starts.

## Configure an EMR Spark session


When working with Amazon EMR on EC2 or EMR Serverless, you can use the `%%configure` command to configure the Spark session creation parameters. Using `conf` settings, you can set any Spark configuration that's mentioned in the configuration documentation for Apache Spark.

```
%%configure -n compute_name -f
{
    "conf": {
        "spark.sql.shuffle.partitions": "36"
    }
}
```
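EMR Spark sessions are created through Apache Livy, so in addition to `conf`, Livy session fields such as `driverMemory`, `executorMemory`, and `numExecutors` can typically be set at the top level of the dictionary. This is a sketch based on the Livy session API; field support may vary by compute type:

```
%%configure -n compute_name -f
{
    "driverMemory": "4g",
    "executorMemory": "4g",
    "numExecutors": 2,
    "conf": {
        "spark.sql.shuffle.partitions": "36"
    }
}
```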

## Configure a Glue interactive session


Use the `--` prefix for run arguments that you specify for AWS Glue.

```
%%configure -n project.spark.compatibility -f
{
   "--enable-auto-scaling": "true",
   "--enable-glue-datacatalog": "false"
}
```

For more information about job parameters, see Job parameters in the AWS Glue documentation.

When working with AWS Glue, you can also update the Spark configuration by passing `--conf` in the `%%configure` magic. You can set any Spark configuration that's mentioned in the configuration documentation for Apache Spark.

```
%%configure -n project.spark.compatibility -f
{
    "--conf": "spark.sql.shuffle.partitions=36"
}
```
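To set more than one Spark property, AWS Glue typically expects the values chained inside a single `--conf` argument rather than a repeated JSON key. The following is a sketch based on the Glue job-parameter convention; verify the exact syntax against the Glue documentation for your version:

```
%%configure -n project.spark.compatibility -f
{
    "--conf": "spark.sql.shuffle.partitions=36 --conf spark.executor.memory=4g"
}
```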