

# Task-Specific Models


JumpStart supports task-specific models across fifteen of the most popular problem types. Thirteen of the supported problem types are vision- or NLP-related, and eight problem types support incremental training and fine-tuning. For more information about incremental training and hyperparameter tuning, see [SageMaker AI Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html). JumpStart also supports four popular algorithms for tabular data modeling.

You can search and browse models from the JumpStart landing page in Studio or Studio Classic. When you select a model, the model detail page provides information about the model, and you can train and deploy your model in a few steps. The description section describes what you can do with the model, the expected types of inputs and outputs, and the data type needed for fine-tuning your model. 

You can also use models programmatically with the [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart). For a list of all available models, see the [JumpStart Available Model Table](https://sagemaker.readthedocs.io/en/v2.132.0/doc_utils/pretrainedmodels.html).

The following table summarizes the supported problem types and links to their example Jupyter notebooks.


| Problem types  | Supports inference with pre-trained models  | Trainable on a custom dataset  | Supported frameworks  | Example Notebooks  | 
| --- | --- | --- | --- | --- | 
| Image classification  | Yes  | Yes  |  PyTorch, TensorFlow  |  [Introduction to JumpStart - Image Classification](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_image_classification/Amazon_JumpStart_Image_Classification.ipynb)  | 
| Object detection  | Yes  | Yes  | PyTorch, TensorFlow, MXNet |  [Introduction to JumpStart - Object Detection](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_object_detection/Amazon_JumpStart_Object_Detection.ipynb)  | 
| Semantic segmentation  | Yes  | Yes  | MXNet  |  [Introduction to JumpStart - Semantic Segmentation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_semantic_segmentation/Amazon_JumpStart_Semantic_Segmentation.ipynb)  | 
| Instance segmentation  | Yes  | Yes  | MXNet  |  [Introduction to JumpStart - Instance Segmentation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_instance_segmentation/Amazon_JumpStart_Instance_Segmentation.ipynb)  | 
| Image embedding  | Yes  | No  | TensorFlow, MXNet |  [Introduction to JumpStart - Image Embedding](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_image_embedding/Amazon_JumpStart_Image_Embedding.ipynb)  | 
| Text classification  | Yes  | Yes  | TensorFlow |  [Introduction to JumpStart - Text Classification](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_classification/Amazon_JumpStart_Text_Classification.ipynb)  | 
| Sentence pair classification  | Yes  | Yes  | TensorFlow, Hugging Face |  [Introduction to JumpStart - Sentence Pair Classification](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_sentence_pair_classification/Amazon_JumpStart_Sentence_Pair_Classification.ipynb)  | 
| Question answering  | Yes  | Yes  | PyTorch, Hugging Face |  [Introduction to JumpStart – Question Answering](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_question_answering/Amazon_JumpStart_Question_Answering.ipynb)  | 
| Named entity recognition  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Named Entity Recognition](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_named_entity_recognition/Amazon_JumpStart_Named_Entity_Recognition.ipynb)  | 
| Text summarization  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Text Summarization](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_summarization/Amazon_JumpStart_Text_Summarization.ipynb)  | 
| Text generation  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Text Generation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_generation/Amazon_JumpStart_Text_Generation.ipynb)  | 
| Machine translation  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Machine Translation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_machine_translation/Amazon_JumpStart_Machine_Translation.ipynb)  | 
| Text embedding  | Yes  | No  | TensorFlow, MXNet |  [Introduction to JumpStart - Text Embedding](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_embedding/Amazon_JumpStart_Text_Embedding.ipynb)  | 
| Tabular classification  | Yes  | Yes  | LightGBM, CatBoost, XGBoost, AutoGluon-Tabular, TabTransformer, Linear Learner |  [Introduction to JumpStart - Tabular Classification - LightGBM, CatBoost](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/lightgbm_catboost_tabular/Amazon_Tabular_Classification_LightGBM_CatBoost.ipynb) [Introduction to JumpStart - Tabular Classification - XGBoost, Linear Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/xgboost_linear_learner_tabular/Amazon_Tabular_Classification_XGBoost_LinearLearner.ipynb) [Introduction to JumpStart - Tabular Classification - AutoGluon Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/autogluon_tabular/Amazon_Tabular_Classification_AutoGluon.ipynb) [Introduction to JumpStart - Tabular Classification - TabTransformer Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/tabtransformer_tabular/Amazon_Tabular_Classification_TabTransformer.ipynb)  | 
| Tabular regression  | Yes  | Yes  | LightGBM, CatBoost, XGBoost, AutoGluon-Tabular, TabTransformer, Linear Learner |  [Introduction to JumpStart - Tabular Regression - LightGBM, CatBoost](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/lightgbm_catboost_tabular/Amazon_Tabular_Regression_LightGBM_CatBoost.ipynb) [Introduction to JumpStart – Tabular Regression - XGBoost, Linear Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/xgboost_linear_learner_tabular/Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb) [Introduction to JumpStart – Tabular Regression - AutoGluon Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/autogluon_tabular/Amazon_Tabular_Regression_AutoGluon.ipynb) [Introduction to JumpStart – Tabular Regression - TabTransformer Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/tabtransformer_tabular/Amazon_Tabular_Regression_TabTransformer.ipynb)  | 

# Deploy a Model


When you deploy a model from JumpStart, SageMaker AI hosts the model and deploys an endpoint that you can use for inference. JumpStart also provides an example notebook that you can use to access the model after it's deployed. 

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
For more information about JumpStart model deployment in Studio, see [Deploy a model in Studio](jumpstart-foundation-models-use-studio-updated-deploy.md).

## Model deployment configuration


After you choose a model, the model's tab opens. In the **Deploy Model** pane, choose **Deployment Configuration** to configure your model deployment. 

 ![\[The Deploy Model pane.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy.png) 

The default instance type for deploying a model depends on the model. The instance type is the hardware that your model runs on. In the following example, the `ml.p2.xlarge` instance is the default for this particular BERT model. 

You can also change the endpoint name, add key-value resource tags, activate or deactivate the `jumpstart-` prefix for any JumpStart resources related to the model, and specify an Amazon S3 bucket for storing model artifacts used by your SageMaker AI endpoint.

 ![\[JumpStart Deploy Model pane with Deployment Configuration open to select its settings.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-config.png) 

Choose **Security Settings** to specify the AWS Identity and Access Management (IAM) role, Amazon Virtual Private Cloud (Amazon VPC), and encryption keys for the model.

 ![\[JumpStart Deploy Model pane with Security Settings open to select its settings.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security.png) 

## Model deployment security


When you deploy a model with JumpStart, you can specify an IAM role, Amazon VPC, and encryption keys for the model. If you don't specify values for these entries, the default IAM role is your Studio Classic runtime role, default encryption is used, and no Amazon VPC is used.

### IAM role


You can select an IAM role that is passed as part of training jobs and hosting jobs. SageMaker AI uses this role to access training data and model artifacts. If you don't select an IAM role, SageMaker AI deploys the model using your Studio Classic runtime role. For more information about IAM roles, see [AWS Identity and Access Management for Amazon SageMaker AI](security-iam.md).

The role that you pass must have access to the resources that the model needs, and must include all of the following.
+ For training jobs: [CreateTrainingJob API: Execution Role Permissions](https://docs.aws.amazon.com//sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createtrainingjob-perms).
+ For hosting jobs: [CreateModel API: Execution Role Permissions](https://docs.aws.amazon.com//sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createmodel-perms).

**Note**  
You can scope down the Amazon S3 permissions granted to the role by using the ARN of your Amazon Simple Storage Service (Amazon S3) bucket and the JumpStart Amazon S3 bucket.  

```
[
  {
    "Effect": "Allow",
    "Action": [
      "s3:GetObject",
      "s3:ListBucket"
    ],
    "Resource": [
      "arn:aws:s3:::jumpstart-cache-prod-<region>/*",
      "arn:aws:s3:::jumpstart-cache-prod-<region>",
      "arn:aws:s3:::<bucket>/*"
    ]
  },
  {
    "Effect": "Allow",
    "Action": [
      "cloudwatch:PutMetricData",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:CreateLogGroup",
      "logs:DescribeLogStreams",
      "ecr:GetAuthorizationToken"
    ],
    "Resource": [
      "*"
    ]
  },
  {
    "Effect": "Allow",
    "Action": [
      "ecr:BatchGetImage",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer"
    ],
    "Resource": [
      "*"
    ]
  }
]
```
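As a convenience, the `<region>` and `<bucket>` placeholders in the statement above can be filled in programmatically. The following is a minimal sketch; the function name and example values are illustrative, not part of any SageMaker API:

```python
def jumpstart_s3_resources(region: str, bucket: str) -> list:
    """Fill in the <region> and <bucket> placeholders from the policy above."""
    return [
        f"arn:aws:s3:::jumpstart-cache-prod-{region}/*",
        f"arn:aws:s3:::jumpstart-cache-prod-{region}",
        f"arn:aws:s3:::{bucket}/*",
    ]

# For example, for a bucket named "my-bucket" in us-west-2:
scoped = jumpstart_s3_resources("us-west-2", "my-bucket")
```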

**Find IAM role**

If you select this option, you must select an existing IAM role from the dropdown list.

 ![\[JumpStart Security Settings IAM section with Find IAM role selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-findiam.png) 

**Input IAM role**

If you select this option, you must manually enter the ARN for an existing IAM role. If your Studio Classic runtime role or Amazon VPC blocks the `iam:list*` call, you must use this option to use an existing IAM role.

 ![\[JumpStart Security Settings IAM section with Input IAM role selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-inputiam.png) 

### Amazon VPC


All JumpStart models run in network isolation mode. After the model container is created, it can't make outbound calls. You can select an Amazon VPC that is passed as part of training jobs and hosting jobs. SageMaker AI uses this Amazon VPC to push and pull resources from your Amazon S3 bucket. This Amazon VPC is different from the Amazon VPC that limits access to the public internet from your Studio Classic instance. For more information about the Studio Classic Amazon VPC, see [Connect Studio notebooks in a VPC to external resources](studio-notebooks-and-internet-access.md).

The Amazon VPC that you pass does not need access to the public internet, but it does need access to Amazon S3. The Amazon VPC endpoint for Amazon S3 must allow access to at least the following resources that the model needs.

```
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListMultipartUploadParts",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::jumpstart-cache-prod-<region>/*",
    "arn:aws:s3:::jumpstart-cache-prod-<region>",
    "arn:aws:s3:::<bucket>/*"
  ]
}
```

If you do not select an Amazon VPC, no Amazon VPC is used.

**Find VPC**

If you select this option, you must select an existing Amazon VPC from the dropdown list. After you select an Amazon VPC, you must select a subnet and security group for your Amazon VPC. For more information about subnets and security groups, see [Overview of VPCs and subnets](https://docs.aws.amazon.com//vpc/latest/userguide/VPC_Subnets.html).

 ![\[JumpStart Security Settings VPC section with Find VPC selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-findvpc.png) 

**Input VPC**

If you select this option, you must manually select the subnet and security group that compose your Amazon VPC. If your Studio Classic runtime role or Amazon VPC blocks the `ec2:list*` call, you must use this option to select the subnet and security group.

 ![\[JumpStart Security Settings VPC section with Input VPC selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-inputvpc.png) 

### Encryption keys


You can select an AWS KMS key that is passed as part of training jobs and hosting jobs. SageMaker AI uses this key to encrypt the Amazon EBS volume for the container, and the repackaged model in Amazon S3 for hosting jobs and the output for training jobs. For more information about AWS KMS keys, see [AWS KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#kms_keys).

The key that you pass must trust the IAM role that you pass. If you do not specify an IAM role, the AWS KMS key must trust your Studio Classic runtime role.

If you do not select an AWS KMS key, SageMaker AI provides default encryption for the data in the Amazon EBS volume and the Amazon S3 artifacts.

**Find encryption keys**

If you select this option, you must select existing AWS KMS keys from the dropdown list.

 ![\[JumpStart Security Settings encryption section with Find encryption keys selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-findencryption.png) 

**Input encryption keys**

If you select this option, you must manually enter the AWS KMS keys. If your Studio Classic runtime role or Amazon VPC blocks the `kms:list*` call, you must use this option to specify existing AWS KMS keys.

 ![\[JumpStart Security Settings encryption section with Input encryption keys selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-inputencryption.png) 

## Configure default values for JumpStart models


You can configure default values for parameters such as IAM roles, VPCs, and KMS keys to pre-populate for JumpStart model deployment and training. After you configure default values, the Studio Classic UI automatically applies your specified security settings and tags to JumpStart models, which simplifies deployment and training workflows. Administrators and end users set these default values in configuration files in YAML format.

By default, the SageMaker Python SDK uses two configuration files: one for the administrator and one for the user. Using the administrator configuration file, administrators can define a set of default values. End users can override values set in the administrator configuration file and set additional default values using the end-user configuration file. For more information, see [Default configuration file location](https://sagemaker.readthedocs.io/en/stable/overview.html#default-configuration-file-location). 

The following code sample lists the default locations of the configuration files when using the SageMaker Python SDK in Amazon SageMaker Studio Classic.

```
# Location of the admin config file
/etc/xdg/sagemaker/config.yaml

# Location of the user config file
/root/.config/sagemaker/config.yaml
```

Values specified in the user configuration file override values set in the administrator configuration file. The configuration file is unique to each user profile within an Amazon SageMaker AI domain. The user profile's Studio Classic application is directly associated with the user profile. For more information, see [Domain user profiles](domain-user-profile.md).
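This override behavior can be illustrated with a small sketch. This is not the SDK's actual merge logic, and the role ARNs below are placeholders:

```python
def merge_defaults(admin: dict, user: dict) -> dict:
    """Overlay user-file values on admin-file values, recursing into
    nested sections. Illustrative only, not the SDK's implementation."""
    merged = dict(admin)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_defaults(merged[key], value)
        else:
            merged[key] = value
    return merged

admin_cfg = {"SageMaker": {"Model": {
    "EnableNetworkIsolation": True,
    "ExecutionRoleArn": "arn:aws:iam::111122223333:role/AdminDefault"}}}
user_cfg = {"SageMaker": {"Model": {
    "ExecutionRoleArn": "arn:aws:iam::111122223333:role/MyUserRole"}}}

merged = merge_defaults(admin_cfg, user_cfg)
# The user's role overrides the admin default; other admin values survive.
```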

Administrators can optionally set configuration defaults for JumpStart model training and deployment through `JupyterServer` lifecycle configurations. For more information, see [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md).

### Default value configuration YAML file


Your configuration file should adhere to the SageMaker Python SDK [configuration file structure](https://sagemaker.readthedocs.io/en/stable/overview.html#configuration-file-structure). Note that specific fields in the `TrainingJob`, `Model`, and `EndpointConfig` configurations apply to JumpStart model training and deployment default values.

```
SchemaVersion: '1.0'
SageMaker:
  TrainingJob:
    OutputDataConfig:
      KmsKeyId: example-key-id
    ResourceConfig:
      # Training configuration - Volume encryption key
      VolumeKmsKeyId: example-key-id
    # Training configuration form - IAM role
    RoleArn: arn:aws:iam::123456789012:role/SageMakerExecutionRole
    VpcConfig:
      # Training configuration - Security groups
      SecurityGroupIds:
      - sg-1
      - sg-2
      # Training configuration - Subnets
      Subnets:
      - subnet-1
      - subnet-2
    # Training configuration - Custom resource tags
    Tags:
    - Key: Example-key
      Value: Example-value
  Model:
    EnableNetworkIsolation: true
    # Deployment configuration - IAM role
    ExecutionRoleArn: arn:aws:iam::123456789012:role/SageMakerExecutionRole
    VpcConfig:
      # Deployment configuration - Security groups
      SecurityGroupIds:
      - sg-1
      - sg-2
      # Deployment configuration - Subnets
      Subnets:
      - subnet-1
      - subnet-2
  EndpointConfig:
    AsyncInferenceConfig:
      OutputConfig:
        KmsKeyId: example-key-id
    DataCaptureConfig:
      # Deployment configuration - Volume encryption key
      KmsKeyId: example-key-id
    KmsKeyId: example-key-id
    # Deployment configuration - Custom resource tags
    Tags:
    - Key: Example-key
      Value: Example-value
```

# Fine-Tune a Model


Fine-tuning trains a pretrained model on a new dataset without training from scratch. This process, also known as transfer learning, can produce accurate models with smaller datasets and less training time. You can fine-tune a model if its card shows a **fine-tunable** attribute set to **Yes**. 

 ![\[JumpStart fine-tunable Image Classification - TensorFlow model\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-finetune-model.png) 

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
For more information about JumpStart model fine-tuning in Studio, see [Fine-tune a model in Studio](jumpstart-foundation-models-use-studio-updated-fine-tune.md).

## Fine-Tuning data source


 When you fine-tune a model, you can use the default dataset or choose your own data, which is located in an Amazon S3 bucket. 

To browse the buckets available to you, choose **Find S3 bucket**. These buckets are limited by the permissions used to set up your Studio Classic account. You can also specify an Amazon S3 URI by choosing **Enter Amazon S3 bucket location**. 

 ![\[JumpStart data source settings with default dataset selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-dataset.png) 

**Tip**  
 To find out how to format the data in your bucket, choose **Learn more**. The description section for the model has detailed information about inputs and outputs.  

 For text models: 
+  The bucket must have a `data.csv` file. 
+  The first column must be a unique integer for the class label. For example: `1`, `2`, `3`, `4`, `n`
+  The second column must be a string. 
+  The second column should have the corresponding text that matches the type and language for the model.  
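The text-model format above can be sketched with the standard library. The labels and example sentences here are illustrative placeholders:

```python
import csv
import pathlib
import tempfile

# Write a tiny data.csv in the expected shape: an integer class label in the
# first column and the matching text in the second.
workdir = pathlib.Path(tempfile.mkdtemp())
rows = [(1, "great product, works as described"),
        (2, "arrived broken, very disappointed")]
with open(workdir / "data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read it back, confirming the first column parses as an integer label.
with open(workdir / "data.csv", newline="") as f:
    parsed = [(int(label), text) for label, text in csv.reader(f)]
```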

 For vision models: 
+  The bucket must have as many subdirectories as the number of classes. 
+  Each subdirectory should contain images that belong to that class in .jpg format. 
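The vision-model layout can be sketched the same way: one subdirectory per class, each holding `.jpg` images for that class. Class and file names here are illustrative:

```python
import pathlib
import tempfile

# Build the expected layout: root/<class-name>/<image>.jpg
root = pathlib.Path(tempfile.mkdtemp())
for class_name in ("cats", "dogs"):
    class_dir = root / class_name
    class_dir.mkdir()
    (class_dir / "example-0.jpg").touch()

# The class labels are inferred from the subdirectory names.
classes = sorted(p.name for p in root.iterdir() if p.is_dir())
```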

**Note**  
 The Amazon S3 bucket must be in the same AWS Region where you're running SageMaker Studio Classic because SageMaker AI doesn't allow cross-Region requests. 

## Fine-Tuning deployment configuration


The p3 instance family is the fastest for deep learning training and is recommended for fine-tuning a model. The following chart shows the number of GPUs in each instance type. There are other available options that you can choose from, including p2 and g4 instance types. 


|  Instance type  |  GPUs  | 
| --- | --- | 
|  p3.2xlarge  |  1  | 
|  p3.8xlarge  |  4  | 
|  p3.16xlarge  |  8  | 
|  p3dn.24xlarge  |  8  | 

## Hyperparameters


You can customize the hyperparameters of the training job that are used to fine-tune the model. The hyperparameters available for each fine-tunable model differ depending on the model. For information on each available hyperparameter, reference the hyperparameters documentation for the model of your choosing in [Built-in algorithms and pretrained models in Amazon SageMaker](algos.md). For example, see [Image Classification - TensorFlow Hyperparameters](IC-TF-Hyperparameter.md) for details on the fine-tunable Image Classification - TensorFlow hyperparameters.

If you use the default dataset for text models without changing the hyperparameters, you get a nearly identical model as a result. For vision models, the default dataset is different from the dataset used to train the pretrained models, so your model is different as a result. 

The following hyperparameters are common among models: 
+ **Epochs** – One epoch is one cycle through the entire dataset. The dataset is processed in batches, and completing every batch completes an epoch. Multiple epochs are run until the accuracy of the model reaches an acceptable level, or until the error rate drops below an acceptable level. 
+ **Learning rate** – The amount by which the model's weights are changed at each update. As the model is refined, its internal weights are nudged and error rates are checked to see if the model improves. A typical learning rate is 0.1 or 0.01. A rate of 0.01 is a much smaller adjustment and could cause the training to take a long time to converge, whereas 0.1 is much larger and can cause the training to overshoot. The learning rate is one of the primary hyperparameters that you might adjust for training your model. Note that for text models, a much smaller learning rate (such as 5e-5 for BERT) can result in a more accurate model. 
+ **Batch size** – The number of records from the dataset that are selected in each interval and sent to the GPUs for training. 

  In an image example, you might send out 32 images per GPU, so 32 would be your batch size. If you choose an instance type with more than one GPU, the batch is divided by the number of GPUs. Suggested batch size varies depending on the data and the model that you are using. For example, how you optimize for image data differs from how you handle language data. 

  In the instance type chart in the deployment configuration section, you can see the number of GPUs per instance type. Start with a standard recommended batch size (for example, 32 for a vision model). Then, multiply this by the number of GPUs in the instance type that you selected. For example, if you're using a `p3.8xlarge`, this would be 32 (batch size) multiplied by 4 (GPUs), for a total of 128, as your batch size adjusts for the number of GPUs. For a text model like BERT, try starting with a batch size of 64, and then reduce as needed. 
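The batch-size arithmetic can be captured in a small helper. The GPU counts come from the table in the deployment configuration section; the function name is illustrative:

```python
# GPUs per instance type, from the fine-tuning deployment configuration table.
GPUS_PER_INSTANCE = {
    "p3.2xlarge": 1,
    "p3.8xlarge": 4,
    "p3.16xlarge": 8,
    "p3dn.24xlarge": 8,
}

def effective_batch_size(instance_type: str, per_gpu_batch: int) -> int:
    """Scale a per-GPU batch size by the number of GPUs in the instance."""
    return per_gpu_batch * GPUS_PER_INSTANCE[instance_type]

# 32 per GPU on a p3.8xlarge (4 GPUs) gives an effective batch size of 128.
```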


## Training output


When the fine-tuning process is complete, JumpStart provides information about the model: parent model, training job name, training job ARN, training time, and output path. The output path is where you can find your new model in an Amazon S3 bucket. The folder structure uses the model name that you provided, and the model file is in an `/output` subfolder and is always named `model.tar.gz`.  

 Example: `s3://bucket/model-name/output/model.tar.gz` 
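The path convention can be expressed as a small helper; the function name and example values below are illustrative:

```python
def model_artifact_uri(bucket: str, model_name: str) -> str:
    """Build the S3 URI where the fine-tuned model artifact is stored."""
    return f"s3://{bucket}/{model_name}/output/model.tar.gz"

# For example, a model named "my-model" in the bucket "my-bucket":
uri = model_artifact_uri("my-bucket", "my-model")
```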

## Configure default values for model training


You can configure default values for parameters such as IAM roles, VPCs, and KMS keys to pre-populate for JumpStart model deployment and training. For more information, see [Configure default values for JumpStart models](jumpstart-deploy.md#jumpstart-config-defaults).

# Share Models


**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can share JumpStart models through the Studio Classic UI directly from the **Launched JumpStart assets** page using the following procedure:

1. Open Amazon SageMaker Studio Classic and choose **Launched JumpStart assets** in the **JumpStart** section of the left navigation pane.

1. Select the **Training jobs** tab to view the list of your model training jobs.

1. Under the **Training jobs** list, select the training job that you want to share. This opens the training job details page. You cannot share more than one training job at a time.

1. In the header for the training job, choose **Share**, and select **Share with my organization**.

For more information about sharing models with your organization, see [Shared Models and Notebooks](jumpstart-content-sharing.md).