

# SageMaker JumpStart pretrained models
<a name="studio-jumpstart"></a>

Amazon SageMaker JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. You can incrementally train and tune these models before deployment. JumpStart also provides solution templates that set up infrastructure for common use cases, and executable example notebooks for machine learning with SageMaker AI.

You can deploy, fine-tune, and evaluate pretrained models from popular model hubs through the Models landing page in the updated Studio experience.

You can also access pretrained models, solution templates, and examples through the Models landing page in Amazon SageMaker Studio Classic. 

The following steps show how to access JumpStart models using Amazon SageMaker Studio and Amazon SageMaker Studio Classic.

You can also access JumpStart models using the SageMaker Python SDK. For information about how to use JumpStart models programmatically, see [Use SageMaker JumpStart Algorithms with Pretrained Models](https://sagemaker.readthedocs.io/en/stable/overview.html#use-sagemaker-jumpstart-algorithms-with-pretrained-models).
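As a rough illustration of programmatic access, a deployment with the SDK might look like the following sketch. The model ID, prompt, and function name are illustrative, and an AWS account with the appropriate permissions is assumed; see the linked SDK documentation for supported model IDs and parameters.

```python
# Hedged sketch of programmatic JumpStart access with the SageMaker Python SDK.
# The model ID and prompt below are examples; running this requires AWS
# credentials, permissions, and the `sagemaker` package, so the AWS calls are
# kept inside a function.
def deploy_and_query(model_id="huggingface-llm-falcon-7b-instruct-bf16"):
    from sagemaker.jumpstart.model import JumpStartModel

    # Create a model object from a JumpStart model ID and deploy it to an endpoint.
    model = JumpStartModel(model_id=model_id)
    predictor = model.deploy()

    # Send a sample prompt to the deployed endpoint.
    return predictor.predict({"inputs": "What is machine learning?"})
```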

## Open JumpStart in Studio
<a name="jumpstart-open-studio"></a>

In Amazon SageMaker Studio, open the Models landing page either through the **Home** page or the **Models** item in the left-side panel. This opens the **SageMaker Models** landing page where you can explore models in the SageMakerPublicHub, models in Private Hubs or Curated Hubs, and customized models.
+ From the **Home** page, choose **Explore models** in the **Start your model customization workflow** pane. 
+ From the menu in the left panel, navigate to the **Models** node.

For more information on getting started with Amazon SageMaker Studio, see [Amazon SageMaker Studio](studio-updated.md).

![\[Amazon SageMaker Studio interface with access to JumpStart.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-studio-nav.png)


## Use JumpStart in Studio
<a name="jumpstart-use-studio"></a>

**Important**  
Before downloading or using third-party content: You are responsible for reviewing and complying with any applicable license terms and making sure that they are acceptable for your use case.  
On March 13, 2026, we delisted select models from the JumpStart catalog across AWS Regions to improve discoverability and focus on high-quality, well-supported options. Existing endpoints for delisted models remain functional. For license information on delisted open-weight models, see the Hugging Face listing for the respective model.

From the **SageMaker Models** landing page in Studio, you can explore JumpStart base models from both proprietary and publicly available model providers. You can search directly for models, filter by specific model provider, or filter based on a list of provided use cases and actions.

![\[Amazon SageMaker Studio Models landing page.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-studio-landing.png)


Choose a model to see its model detail card. In the upper right-hand corner of the model detail card, choose **Fine-tune**, **Customize**, **Deploy**, or **Evaluate** to start the corresponding workflow. Note that not all models are available for customization, fine-tuning, or evaluation. For more information on each of these options, see [Use foundation models in Studio](jumpstart-foundation-models-use-studio-updated.md).

You can also access **Private or Curated Hub** models through a dedicated tab. These models work like JumpStart base models: choosing a model card opens its detail page, where the available actions appear.

Additionally, select **My models** to access your fine-tuned and registered models. Outputs from customization jobs appear there under the **Logged** models tab, along with your **Deployable** models.

## Open and use JumpStart in Studio Classic
<a name="jumpstart-open-use"></a>

The following sections give information on how to open, use, and manage JumpStart from the Amazon SageMaker Studio Classic UI.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

### Open JumpStart in Studio Classic
<a name="jumpstart-open"></a>

In Amazon SageMaker Studio Classic, open the JumpStart landing page either through the **Home** page or the **Home** menu on the left-side panel. 
+ From the **Home** page you can either:
  + Choose **JumpStart** in the **Prebuilt and automated solutions** pane. This opens the **SageMaker JumpStart** landing page.
  + Choose a model directly in the **SageMaker JumpStart** landing page, or choose the **Explore All** option to see available solutions or models of a specific type. 
+ From the **Home** menu in the left panel you can either:
  + Navigate to the **SageMaker JumpStart** node, then choose **Models, notebooks, solutions**. This opens the **SageMaker JumpStart** landing page.
  + Navigate to the **JumpStart** node, then choose **Launched JumpStart assets**.

    The **Launched JumpStart assets** page lists your currently launched solutions, deployed model endpoints, and training jobs created with JumpStart. You can access the JumpStart landing page from this tab by choosing the **Browse JumpStart** button at the top right of the tab.

The JumpStart landing page lists available end-to-end machine learning solutions, pretrained models, and example notebooks. From any individual solution or model page, you can choose the **Browse JumpStart** button (![\[Button labeled "Browse JumpStart" with an icon indicating a browsing action.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-browse-button.png)) at the top right of the tab to return to the **SageMaker JumpStart** page.

![\[SageMaker Studio Classic interface with access to JumpStart.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-assets.png)


**Important**  
Before downloading or using third-party content: You are responsible for reviewing and complying with any applicable license terms and making sure that they are acceptable for your use case.  
On March 13, 2026, we delisted select models from the JumpStart catalog across AWS Regions to improve discoverability and focus on high-quality, well-supported options. Existing endpoints for delisted models remain functional. For license information on delisted open-weight models, see the Hugging Face listing for the respective model.

### Use JumpStart in Studio Classic
<a name="jumpstart-using"></a>

From the **SageMaker JumpStart** landing page, you can browse for solutions, models, notebooks, and other resources.

![\[SageMaker Studio Classic JumpStart landing page.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-use.png)


You can find JumpStart resources by using the search bar, or by browsing each category. Use the tabs to filter the available solutions by categories:
+  **Solutions** – In one step, launch comprehensive machine learning solutions that tie SageMaker AI to other AWS services. Select **Explore All Solutions** to view all available solutions.
+  **Resources** – Use example notebooks, blogs, and video tutorials to learn about SageMaker AI and get a head start on your problem types.
  +  **Blogs** – Read details and solutions from machine learning experts. 
  +  **Video tutorials** – Watch video tutorials for SageMaker AI features and machine learning use cases from machine learning experts.
  +  **Example notebooks** – Run example notebooks that use SageMaker AI features like Spot Instance training and experiments over a large variety of model types and use cases. 
+  **Data types** – Find a model by data type (e.g., Vision, Text, Tabular, Audio, Text Generation). Select **Explore All Models** to view all available models.
+  **ML tasks** – Find a model by problem type (e.g., Image Classification, Image Embedding, Object Detection, Text Generation). Select **Explore All Models** to view all available models.
+  **Notebooks** – Find example notebooks that use SageMaker AI features across multiple model types and use cases. Select **Explore All Notebooks** to view all available example notebooks.
+  **Frameworks** – Find a model by framework (e.g., PyTorch, TensorFlow, Hugging Face).

### Manage JumpStart in Studio Classic
<a name="jumpstart-managing"></a>

From the **Home** menu in the left panel, navigate to **SageMaker JumpStart**, then choose **Launched JumpStart assets** to list your currently launched solutions, deployed model endpoints, and training jobs created with JumpStart.

**Topics**
+ [Open JumpStart in Studio](#jumpstart-open-studio)
+ [Use JumpStart in Studio](#jumpstart-use-studio)
+ [Open and use JumpStart in Studio Classic](#jumpstart-open-use)
+ [Amazon SageMaker JumpStart Foundation Models](jumpstart-foundation-models.md)
+ [Private curated hubs for foundation model access control in JumpStart](jumpstart-curated-hubs.md)
+ [Amazon SageMaker JumpStart in Studio Classic](jumpstart-studio-classic.md)

# Amazon SageMaker JumpStart Foundation Models
<a name="jumpstart-foundation-models"></a>

Amazon SageMaker JumpStart offers state-of-the-art foundation models for use cases such as content writing, code generation, question answering, copywriting, summarization, classification, information retrieval, and more. Use JumpStart foundation models to build your own generative AI solutions and integrate custom solutions with additional SageMaker AI features. For more information, see [Getting started with Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/getting-started/).

A foundation model is a large pre-trained model that is adaptable to many downstream tasks and often serves as the starting point for developing more specialized models. Examples of foundation models include LLaMa-3-70b, BLOOM 176B, FLAN-T5 XL, and GPT-J 6B, which are pre-trained on massive amounts of text data and can be fine-tuned for specific language tasks. 

Amazon SageMaker JumpStart onboards and maintains publicly available foundation models for you to access, customize, and integrate into your machine learning lifecycles. For more information, see [Publicly available foundation models](jumpstart-foundation-models-latest.md#jumpstart-foundation-models-latest-publicly-available). Amazon SageMaker JumpStart also includes proprietary foundation models from third-party providers. For more information, see [Proprietary foundation models](jumpstart-foundation-models-latest.md#jumpstart-foundation-models-latest-proprietary).

To get started exploring and experimenting with available models, see [JumpStart foundation model usage](jumpstart-foundation-models-use.md). All foundation models are available to use programmatically with the SageMaker Python SDK. For more information, see [Use foundation models with the SageMaker Python SDK](jumpstart-foundation-models-use-python-sdk.md).

For more information on considerations to make when choosing a model, see [Model sources and license agreements](jumpstart-foundation-models-choose.md).

For specifics about customization and fine-tuning foundation models, see [Foundation model customization](jumpstart-foundation-models-customize.md). 

For more general information on foundation models, see the paper [On the Opportunities and Risks of Foundation Models](https://arxiv.org/abs/2108.07258).

**Topics**
+ [Available foundation models](jumpstart-foundation-models-latest.md)
+ [JumpStart foundation model usage](jumpstart-foundation-models-use.md)
+ [Model sources and license agreements](jumpstart-foundation-models-choose.md)
+ [Foundation model customization](jumpstart-foundation-models-customize.md)
+ [Evaluate a text generation foundation model in Studio](jumpstart-foundation-models-evaluate.md)
+ [Example notebooks](jumpstart-foundation-models-example-notebooks.md)

# Available foundation models
<a name="jumpstart-foundation-models-latest"></a>

Amazon SageMaker JumpStart offers state-of-the-art, built-in publicly available and proprietary foundation models to customize and integrate into your generative AI workflows.

## Publicly available foundation models
<a name="jumpstart-foundation-models-latest-publicly-available"></a>

Amazon SageMaker JumpStart onboards and maintains open source foundation models from third-party sources. To get started with one of these publicly available models, see [JumpStart foundation model usage](jumpstart-foundation-models-use.md) or explore one of the available [Example notebooks](jumpstart-foundation-models-example-notebooks.md). In a given example notebook for a publicly available model, try switching out the model ID to experiment with different models within the same model family. 

For more information on model IDs and resources on deploying publicly available JumpStart foundation models with the SageMaker Python SDK, see [Use foundation models with the SageMaker Python SDK](jumpstart-foundation-models-use-python-sdk.md).
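As a hedged sketch of how you might enumerate model IDs before swapping them in a notebook, the SDK's notebook utilities can list the public catalog. The function name and filter value below are illustrative ("task == ic" is an image classification filter from the SDK's filter syntax), and the call itself requires the `sagemaker` package and AWS credentials.

```python
# Hedged sketch: enumerate JumpStart model IDs with the SageMaker Python SDK.
# Requires the `sagemaker` package and AWS credentials at run time, so the SDK
# call is kept inside a function; "task == ic" (image classification) is an
# example value for the SDK's filter syntax.
def list_models_for_task(task_filter="task == ic"):
    from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

    # Returns the model IDs in the public JumpStart catalog matching the filter.
    return list_jumpstart_models(filter=task_filter)
```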

By definition, foundation models are adaptable to many downstream tasks. Foundation models are trained on massive amounts of general-domain data, and the same model can be implemented or customized for multiple use cases. When choosing your foundation model, start by defining a specific task, such as text generation or image generation. 

### Publicly available time series forecasting models
<a name="jumpstart-foundation-models-choose-task-time-series-forecasting"></a>

Time series forecasting models are designed to analyze and make predictions on sequential data over time. These models can be applied to various domains such as finance, weather forecasting, or energy demand forecasting. The Chronos models are tailored for time series forecasting tasks, enabling accurate predictions based on historical data patterns.


| Model Name | Model ID | Model Source | Fine-tunable | 
| --- | --- | --- | --- | 
| Chronos T5 Small | autogluon-forecasting-chronos-t5-small | Amazon | No | 
| Chronos T5 Base | autogluon-forecasting-chronos-t5-base | Amazon | No | 
| Chronos T5 Large | autogluon-forecasting-chronos-t5-large | Amazon | No | 
| Chronos-Bolt Small | autogluon-forecasting-chronos-bolt-small | Amazon |  No  | 
| Chronos-Bolt Base | autogluon-forecasting-chronos-bolt-base | Amazon |  No  | 

### Publicly available text generation models
<a name="jumpstart-foundation-models-choose-task-text-generation"></a>

Text generation foundation models can be used for a variety of downstream tasks, including text summarization, text classification, question answering, long-form content generation, short-form copywriting, information extraction, and more.

To explore the latest text generation JumpStart foundation models, use the **Text Generation** filter on the [Getting started with Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/getting-started/?sagemaker-jumpstart-cards.sort-by=item.additionalFields.priority&sagemaker-jumpstart-cards.sort-order=asc&awsf.sagemaker-jumpstart-filter-product-type=product-type%23foundation-model&awsf.sagemaker-jumpstart-filter-text=ml-task-type%23text-generation&awsf.sagemaker-jumpstart-filter-vision=*all&awsf.sagemaker-jumpstart-filter-tabular=*all&awsf.sagemaker-jumpstart-filter-audio-tasks=*all&awsf.sagemaker-jumpstart-filter-multimodal=*all&awsf.sagemaker-jumpstart-filter-RL=*all&awsm.page-sagemaker-jumpstart-cards=1) product description page. You can also explore foundation models based on tasks directly in the Amazon SageMaker Studio UI or SageMaker Studio Classic UI. Only a subset of publicly available text generation models are available for fine-tuning in JumpStart. For more information, see [Use foundation models in Amazon SageMaker Studio Classic](jumpstart-foundation-models-use-studio.md).

### Publicly available image generation models
<a name="jumpstart-foundation-models-choose-task-image-generation"></a>

JumpStart provides a wide variety of Stable Diffusion image generation foundation models including base models from Stability AI as well as pre-trained models for specific text-to-image tasks from Hugging Face. If you need to fine-tune your text-to-image foundation model, you can use Stable Diffusion 2.1 base from Stability AI. If you want to explore models that are already trained on specific art styles, you can explore one of the many third-party models from Hugging Face directly in the Amazon SageMaker Studio UI or SageMaker Studio Classic UI. 

To explore the latest image generation JumpStart foundation models, use the **Text to Image** filter on the [Getting started with Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/getting-started/?sagemaker-jumpstart-cards.sort-by=item.additionalFields.priority&sagemaker-jumpstart-cards.sort-order=asc&awsf.sagemaker-jumpstart-filter-product-type=product-type%23foundation-model&awsf.sagemaker-jumpstart-filter-text=*all&awsf.sagemaker-jumpstart-filter-vision=*all&awsf.sagemaker-jumpstart-filter-tabular=*all&awsf.sagemaker-jumpstart-filter-audio-tasks=*all&awsf.sagemaker-jumpstart-filter-multimodal=ml-task-type%23txt2img&awsf.sagemaker-jumpstart-filter-RL=*all&awsm.page-sagemaker-jumpstart-cards=1) product description page. To get started with your chosen text-to-image foundation model, see [JumpStart foundation model usage](jumpstart-foundation-models-use.md).

## Proprietary foundation models
<a name="jumpstart-foundation-models-latest-proprietary"></a>

Amazon SageMaker JumpStart provides access to proprietary foundation models from third-party providers such as [AI21 Labs](https://www.ai21.com/), [Cohere](https://cohere.com/), and [LightOn](https://www.lighton.ai/).

To get started with one of these proprietary models, see [JumpStart foundation model usage](jumpstart-foundation-models-use.md). To use a proprietary foundation model, you must first subscribe to the model in AWS Marketplace. After subscribing to the model, locate the foundation model in Studio or SageMaker Studio Classic. For more information, see [SageMaker JumpStart pretrained models](studio-jumpstart.md).

To explore the latest proprietary foundation models for a variety of use cases, see [Getting started with Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/getting-started/?sagemaker-jumpstart-cards.sort-by=item.additionalFields.priority&sagemaker-jumpstart-cards.sort-order=asc&awsf.sagemaker-jumpstart-filter-product-type=product-type%23foundation-model&awsf.sagemaker-jumpstart-filter-text=*all&awsf.sagemaker-jumpstart-filter-vision=*all&awsf.sagemaker-jumpstart-filter-tabular=*all&awsf.sagemaker-jumpstart-filter-audio-tasks=*all&awsf.sagemaker-jumpstart-filter-multimodal=*all&awsf.sagemaker-jumpstart-filter-RL=*all&sagemaker-jumpstart-cards.q=proprietary&sagemaker-jumpstart-cards.q_operator=AND). 

# JumpStart foundation model usage
<a name="jumpstart-foundation-models-use"></a>

Choose, train, or deploy foundation models through Amazon SageMaker Studio or Amazon SageMaker Studio Classic, use JumpStart foundation models programmatically with the SageMaker Python SDK, or discover JumpStart foundation models directly through the SageMaker AI console.

**Topics**
+ [Use foundation models in Studio](jumpstart-foundation-models-use-studio-updated.md)
+ [Use foundation models in Amazon SageMaker Studio Classic](jumpstart-foundation-models-use-studio.md)
+ [Use foundation models with the SageMaker Python SDK](jumpstart-foundation-models-use-python-sdk.md)
+ [Discover foundation models in the SageMaker AI Console](jumpstart-foundation-models-use-console.md)

# Use foundation models in Studio
<a name="jumpstart-foundation-models-use-studio-updated"></a>

Amazon SageMaker Studio allows you to fine-tune, deploy, and evaluate both publicly available and proprietary JumpStart foundation models directly through the Studio UI.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

To get started, navigate to the JumpStart landing page in Amazon SageMaker Studio. You can access it from the **Home** page or the left-side panel menu. On the **JumpStart** landing page, you can explore model hubs from providers of both publicly available and proprietary models, and search for models.

Within each model hub, you can sort models by **Most likes**, **Most downloads**, or **Recently updated**, or filter them by task. Choose a model to view its detail card. On the model detail card, you can choose to **Fine-tune**, **Deploy**, or **Evaluate** the model, depending on the options available for that model. Note that not all models are available for fine-tuning or evaluation. 

For more information on getting started with Amazon SageMaker Studio, see [Amazon SageMaker Studio](studio-updated.md).

**Topics**
+ [Fine-tune a model in Studio](jumpstart-foundation-models-use-studio-updated-fine-tune.md)
+ [Deploy a model in Studio](jumpstart-foundation-models-use-studio-updated-deploy.md)
+ [Evaluate a model in Studio](jumpstart-foundation-models-use-studio-updated-evaluate.md)
+ [Use your SageMaker JumpStart Models in Amazon Bedrock](jumpstart-foundation-models-use-studio-updated-register-bedrock.md)

# Fine-tune a model in Studio
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune"></a>

Fine-tuning trains a pre-trained model on a new dataset without training from scratch. This process, also known as transfer learning, can produce accurate models with smaller datasets and less training time. To fine-tune JumpStart foundation models, navigate to a model detail card in the Studio UI. For more information on how to open JumpStart in Studio, see [Open JumpStart in Studio](studio-jumpstart.md#jumpstart-open-studio). After navigating to the model detail card of your choice, choose **Train** in the upper right corner. Note that not all models have fine-tuning available.

**Important**  
Some foundation models require explicit acceptance of an end-user license agreement (EULA) before fine-tuning. For more information, see [EULA acceptance in Amazon SageMaker Studio](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula-studio).

## Model settings
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune-model"></a>

When using a pre-trained JumpStart foundation model in Amazon SageMaker Studio, the **Model artifact location (Amazon S3 URI)** is populated by default. To edit the default Amazon S3 URI, choose **Enter model artifact location**. Not all models support changing the model artifact location.

## Data settings
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune-data"></a>

In the **Data** field, provide an Amazon S3 URI that points to your training dataset location. The default Amazon S3 URI points to an example training dataset. To edit the default Amazon S3 URI, choose **Enter training dataset** and change the URI. Be sure to review the model detail card in Amazon SageMaker Studio for information on formatting your training data.

## Hyperparameters
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune-hyperparameters"></a>

You can customize the hyperparameters of the training job that are used to fine-tune the model. The hyperparameters available for each fine-tunable model differ depending on the model. 

The following hyperparameters are common among models: 
+ **Epochs** – One epoch is one full pass through the entire training dataset. The dataset is processed in batches, and the batches together make up an epoch. Training runs for multiple epochs until the accuracy of the model reaches an acceptable level, or the error rate drops below an acceptable threshold. 
+ **Learning rate** – The step size by which model weights are adjusted during training. As the model is refined, its internal weights are nudged and error rates are checked to see whether the model improves. A typical learning rate is 0.1 or 0.01. A rate of 0.01 makes much smaller adjustments and can cause training to take a long time to converge, whereas 0.1 makes larger adjustments and can cause training to overshoot. The learning rate is one of the primary hyperparameters that you might adjust when training your model. Note that for text models, a much smaller learning rate (such as 5e-5 for BERT) can result in a more accurate model. 
+ **Batch size** – The number of records from the dataset that are sent to the GPUs in each training step. 
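To make the relationship between these settings concrete, the following small calculation uses made-up numbers (not JumpStart defaults) to show how batch size determines steps per epoch, and how the learning rate scales each weight update:

```python
import math

# Illustrative numbers only; these are not JumpStart defaults.
dataset_size = 10_000   # records in the training dataset
batch_size = 32         # records sent to the GPUs per training step
epochs = 5              # full passes through the dataset

# Each epoch consists of ceil(dataset_size / batch_size) batches (steps).
steps_per_epoch = math.ceil(dataset_size / batch_size)
total_steps = steps_per_epoch * epochs

# A single update nudges a weight by learning_rate * gradient, so a smaller
# learning rate makes a smaller adjustment per step.
weight, gradient = 1.0, 0.5
step_large = weight - 0.1 * gradient    # larger adjustment, may overshoot
step_small = weight - 0.01 * gradient   # smaller adjustment, slower to converge

print(steps_per_epoch, total_steps)   # → 313 1565
```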

Review the tool tip prompts and additional information in the model detail card in the Studio UI to learn more about hyperparameters specific to the model of your choice. 

For more information on available hyperparameters, see [Commonly supported fine-tuning hyperparameters](jumpstart-foundation-models-fine-tuning.md#jumpstart-foundation-models-fine-tuning-hyperparameters).

## Deployment
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune-instance"></a>

Specify the training instance type and output artifact location for your training job. Within the fine-tuning workflow in the Studio UI, you can only choose from instances that are compatible with the model of your choice. The default output artifact location is the SageMaker AI default bucket. To change the output artifact location, choose **Enter output artifact location** and change the Amazon S3 URI.

## Security
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune-security"></a>

Specify the security settings to use for your training job, including the IAM role that SageMaker AI uses to train your model, whether your training job should connect to a virtual private cloud (VPC), and any encryption keys to secure your data.

## Additional information
<a name="jumpstart-foundation-models-use-studio-updated-fine-tune-additional-info"></a>

In the **Additional Information** field you can edit the training job name. You can also add and remove tags in the form of key-value pairs to help organize and categorize your fine-tuning training jobs. 

After providing information for your fine-tuning configuration, choose **Submit**. If the pre-trained foundation model that you chose to fine-tune requires explicit acceptance of an end-user license agreement (EULA) before training, the EULA is provided in a pop-up window. To accept the terms of the EULA, choose **Accept**. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.

# Deploy a model in Studio
<a name="jumpstart-foundation-models-use-studio-updated-deploy"></a>

To deploy JumpStart foundation models, navigate to a model detail card in the Studio UI. For more information on how to open JumpStart in Studio, see [Open JumpStart in Studio](studio-jumpstart.md#jumpstart-open-studio). After navigating to the model detail page of your choice, choose **Deploy** in the upper right corner of the Studio UI. Then, follow the steps in [Deploy models with SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deploy-models.html#deploy-models-studio).

Amazon SageMaker JumpStart also offers optimized deployments, which provide predefined deployment configurations designed for specific use cases such as content generation, summarization, or chat-style interactions. When deploying a supported model, you can select your target use case and choose an optimization priority (Cost optimized, Throughput optimized, Latency optimized, or Balanced), and Amazon SageMaker JumpStart automatically configures the endpoint for that scenario. This gives you visibility into key performance metrics such as P50 latency, time to first token (TTFT), and throughput, while ensuring that the deployment is tuned for your workload. To get started, open a supported model's detail page in Studio, choose **Deploy**, and use the **Performance** panel to configure your optimized deployment.
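As a point of reference for these metrics, P50 latency is the 50th percentile (the median) of observed request latencies. The sample values and the single-worker throughput simplification below are made up for illustration:

```python
import statistics

# Made-up per-request latencies in milliseconds, for illustration only.
latencies_ms = [80, 90, 100, 110, 120]

# P50 latency is the 50th percentile, i.e. the median of the samples.
p50_ms = statistics.median(latencies_ms)

# A rough throughput estimate for a single worker handling requests back to
# back (a simplification of real endpoint throughput, which depends on
# concurrency and batching).
throughput_rps = 1000 / p50_ms

print(p50_ms, throughput_rps)   # → 100 10.0
```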

**Important**  
Some foundation models require explicit acceptance of an end-user license agreement (EULA) before deployment. For more information, see [EULA acceptance in Amazon SageMaker Studio](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula-studio).

# Evaluate a model in Studio
<a name="jumpstart-foundation-models-use-studio-updated-evaluate"></a>

Amazon SageMaker JumpStart has integrations with SageMaker Clarify foundation model evaluations (FME) in Studio. If a JumpStart model has built-in evaluation capabilities available, you can choose **Evaluate** in the upper right corner of the model detail page in the JumpStart Studio UI. For more information, see [Evaluate a foundation model](https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models-evaluate.html).

# Use your SageMaker JumpStart Models in Amazon Bedrock
<a name="jumpstart-foundation-models-use-studio-updated-register-bedrock"></a>

You can register the models that you've deployed from Amazon SageMaker JumpStart to Amazon Bedrock. With Amazon Bedrock, you can host your model behind multiple endpoints. You can also use Amazon Bedrock features, such as Agents and Knowledge Bases. For more information about using models with Amazon Bedrock, see [Amazon Bedrock Marketplace](https://docs.aws.amazon.com/bedrock/latest/userguide/amazon-bedrock-marketplace.html).

**Important**  
To migrate your models to Amazon Bedrock, we recommend attaching the [AmazonBedrockFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonBedrockFullAccess.html) managed policy to your IAM role. If you can't attach the managed policy, make sure your IAM role has the following permissions:  

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BedrockAll",
            "Effect": "Allow",
            "Action": [
                "bedrock:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "DescribeKey",
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey"
            ],
            "Resource": "arn:*:kms:*:::*"
        },
        {
            "Sid": "APIsWithAllResourceAccess",
            "Effect": "Allow",
            "Action": [
                "iam:ListRoles",
                "ec2:DescribeVpcs",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Sid": "MarketplaceModelEndpointMutatingAPIs",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateEndpoint",
                "sagemaker:CreateEndpointConfig",
                "sagemaker:CreateModel",
                "sagemaker:CreateInferenceComponent",
                "sagemaker:DeleteInferenceComponent",
                "sagemaker:DeleteEndpoint",
                "sagemaker:UpdateEndpoint"
            ],
            "Resource": [
                "arn:aws:sagemaker:*:*:endpoint/*",
                "arn:aws:sagemaker:*:*:endpoint-config/*",
                "arn:aws:sagemaker:*:*:model/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:CalledViaLast": "bedrock.amazonaws.com"
                }
            }
        },
        {
            "Sid": "BedrockEndpointTaggingOperations",
            "Effect": "Allow",
            "Action": [
                "sagemaker:AddTags",
                "sagemaker:DeleteTags"
            ],
            "Resource": [
                "arn:aws:sagemaker:*:*:endpoint/*",
                "arn:aws:sagemaker:*:*:endpoint-config/*",
                "arn:aws:sagemaker:*:*:model/*"
            ]
        },
        {
            "Sid": "MarketplaceModelEndpointNonMutatingAPIs",
            "Effect": "Allow",
            "Action": [
                "sagemaker:DescribeEndpoint",
                "sagemaker:DescribeEndpointConfig",
                "sagemaker:DescribeModel",
                "sagemaker:DescribeInferenceComponent",
                "sagemaker:ListEndpoints",
                "sagemaker:ListTags"
            ],
            "Resource": [
                "arn:aws:sagemaker:*:*:endpoint/*",
                "arn:aws:sagemaker:*:*:endpoint-config/*",
                "arn:aws:sagemaker:*:*:model/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:CalledViaLast": "bedrock.amazonaws.com"
                }
            }
        },
        {
            "Sid": "BedrockEndpointInvokingOperations",
            "Effect": "Allow",
            "Action": [
                "sagemaker:InvokeEndpoint",
                "sagemaker:InvokeEndpointWithResponseStream"
            ],
            "Resource": [
                "arn:aws:sagemaker:*:*:endpoint/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:CalledViaLast": "bedrock.amazonaws.com"
                }
            }
        },
        {
            "Sid": "DiscoveringMarketplaceModel",
            "Effect": "Allow",
            "Action": [
                "sagemaker:DescribeHubContent"
            ],
            "Resource": [
                "arn:aws:sagemaker:*:aws:hub-content/SageMakerPublicHub/Model/*",
                "arn:aws:sagemaker:*:aws:hub/SageMakerPublicHub"
            ]
        },
        {
            "Sid": "AllowMarketplaceModelsListing",
            "Effect": "Allow",
            "Action": [
                "sagemaker:ListHubContents"
            ],
            "Resource": "arn:aws:sagemaker:*:aws:hub/SageMakerPublicHub"
        },
        {
            "Sid": "RetrieveSubscribedMarketplaceLicenses",
            "Effect": "Allow",
            "Action": [
                "license-manager:ListReceivedLicenses"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "PassRoleToSageMaker",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/*Sagemaker*ForBedrock*"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "sagemaker.amazonaws.com",
                        "bedrock.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "PassRoleToBedrock",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::*:role/*AmazonBedrock*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "bedrock.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
```
The AmazonBedrockFullAccess policy only provides permissions for the Amazon Bedrock API. To use Amazon Bedrock in the AWS Management Console, your IAM role must also have the following permissions:  

```
{
    "Sid": "AllowConsoleS3AccessForBedrockMarketplace",
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:GetBucketCORS",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation"
    ],
    "Resource": "*"
}
```
If you’re writing your own policy, you must include a policy statement that allows the Amazon Bedrock Marketplace actions on your resources. For example, the following policy allows Amazon Bedrock to use the `InvokeModel` operation on a model that you’ve deployed to an endpoint.  


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BedrockAll",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:us-east-1:111122223333:marketplace/model-endpoint/all-access"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": ["sagemaker:InvokeEndpoint"],
            "Resource": "arn:aws:sagemaker:us-east-1:111122223333:endpoint/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/project": "example-project-id",
                    "aws:CalledViaLast": "bedrock.amazonaws.com"
                }
            }
        }
    ]
}
```
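When authoring statements like these, it can help to sanity-check that a wildcard action pattern actually covers the operations you need. The following is a minimal, illustrative sketch that matches an action against a statement's `Action` patterns using glob logic; `statement_allows_action` is a hypothetical helper, not an AWS API, and real IAM evaluation also considers resources, conditions, and explicit denies.

```
from fnmatch import fnmatchcase

def statement_allows_action(statement: dict, action: str) -> bool:
    """Return True if an Allow statement has an Action pattern matching the action.

    Illustrative only: real IAM evaluation also checks Resource, Condition,
    and explicit Deny statements, which this sketch ignores.
    """
    patterns = statement.get("Action", [])
    if isinstance(patterns, str):
        patterns = [patterns]
    return statement.get("Effect") == "Allow" and any(
        fnmatchcase(action, pattern) for pattern in patterns
    )

stmt = {"Sid": "BedrockAll", "Effect": "Allow", "Action": ["bedrock:*"], "Resource": "*"}
print(statement_allows_action(stmt, "bedrock:InvokeModel"))  # True
```

For example, `bedrock:*` covers `bedrock:InvokeModel`, while a statement that only allows `kms:DescribeKey` does not cover `kms:CreateKey`.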

After you've deployed a model, you might be able to use it in Amazon Bedrock. To check, navigate to the model detail card in the Studio UI. If the model card is labeled **Bedrock Ready**, you can register the model with Amazon Bedrock.

**Important**  
By default, Amazon SageMaker JumpStart disables network access for the models that you deploy. If you've enabled network access, you can't use the model with Amazon Bedrock. To use the model with Amazon Bedrock, you must redeploy it with network access disabled.

To register the model with Amazon Bedrock, navigate to the **Endpoint details** page and choose **Use with Bedrock** in the upper right corner of the Studio UI. In the pop-up that appears, choose **Register to Bedrock**.

# Use foundation models in Amazon SageMaker Studio Classic
<a name="jumpstart-foundation-models-use-studio"></a>

You can fine-tune and deploy both publicly available and proprietary JumpStart foundation models through the Studio Classic UI.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

To get started with Studio Classic, see [Launch Amazon SageMaker Studio Classic](studio-launch.md).

 ![\[JumpStart foundation models available in Amazon SageMaker Studio Classic.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-fm-studio.png) 

After opening Amazon SageMaker Studio Classic, choose **Models, notebooks, solutions** in the SageMaker JumpStart section of the navigation pane. Then, scroll down to find either the **Foundation Models: Text Generation** or **Foundation Models: Image Generation** section depending on your use case. 

You can choose **View model** on a suggested foundation model card, or choose **Explore All Models** to see all available foundation models for either text generation or image generation. If you choose to see all available models, you can further filter available models by task, data type, content type, or framework. You can also search for a model name directly in the **Search** bar. If you need guidance on selecting a model, see [Available foundation models](jumpstart-foundation-models-latest.md).

**Important**  
Some foundation models require explicit acceptance of an end-user license agreement (EULA). For more information, see [EULA acceptance in Amazon SageMaker Studio](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula-studio).

After you choose **View model** for the foundation model of your choice in Studio Classic, you can deploy the model. For more information, see [Deploy a Model](jumpstart-deploy.md).

You can also choose **Open notebook** in the **Run in notebook** section to run an example notebook for the foundation model directly in Studio Classic.

**Note**  
To deploy a proprietary foundation model in Studio Classic, you must first subscribe to the model in AWS Marketplace. The AWS Marketplace link is provided in the associated example notebook within Studio Classic.

If the model is fine-tunable, you can also fine-tune the model. For more information, see [Fine-Tune a Model](jumpstart-fine-tune.md). For a list of which JumpStart foundation models are fine-tunable, see [Foundation models and hyperparameters for fine-tuning](jumpstart-foundation-models-fine-tuning.md).

# Use foundation models with the SageMaker Python SDK
<a name="jumpstart-foundation-models-use-python-sdk"></a>

All JumpStart foundation models are available for programmatic deployment using the SageMaker Python SDK.

To deploy publicly available foundation models, you can use their model ID. You can find the model IDs for all publicly available foundation models in the [Built-in Algorithms with pre-trained Model Table](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html). Search for the name of a foundation model in the **Search** bar. Use the **Show entries** dropdown or the pagination controls to navigate the available models.

Proprietary models must be deployed using the model package information after subscribing to the model in AWS Marketplace. 

You can find the list of JumpStart available models in [Available foundation models](jumpstart-foundation-models-latest.md).

**Important**  
Some foundation models require explicit acceptance of an end-user license agreement (EULA). For more information, see [EULA acceptance with the SageMaker Python SDK](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula-python-sdk).

The following sections show how to fine-tune publicly available foundation models using the `JumpStartEstimator` class, deploy publicly available foundation models using the `JumpStartModel` class, and deploy proprietary foundation models using the `ModelPackage` class.

**Topics**
+ [Fine-tune publicly available foundation models with the `JumpStartEstimator` class](jumpstart-foundation-models-use-python-sdk-estimator-class.md)
+ [Deploy publicly available foundation models with the `JumpStartModel` class](jumpstart-foundation-models-use-python-sdk-model-class.md)
+ [Deploy proprietary foundation models with the `ModelPackage` class](jumpstart-foundation-models-use-python-sdk-proprietary.md)

# Fine-tune publicly available foundation models with the `JumpStartEstimator` class
<a name="jumpstart-foundation-models-use-python-sdk-estimator-class"></a>

**Note**  
For instructions on fine-tuning foundation models in a private curated hub, see [Fine-tune curated hub models](jumpstart-curated-hubs-fine-tune.md).

You can fine-tune a built-in algorithm or pre-trained model in just a few lines of code using the SageMaker Python SDK.

1. First, find the model ID for the model of your choice in the [Built-in Algorithms with pre-trained Model Table](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html). 

1. Using the model ID, define your training job as a JumpStart estimator.

   ```
   from sagemaker.jumpstart.estimator import JumpStartEstimator
   
   model_id = "huggingface-textgeneration1-gpt-j-6b"
   estimator = JumpStartEstimator(model_id=model_id)
   ```

1. Run `estimator.fit()` on your model, pointing to the training data to use for fine-tuning.

   ```
   estimator.fit(
       {"train": training_dataset_s3_path, "validation": validation_dataset_s3_path}
   )
   ```

1. Then, use the `deploy` method to automatically deploy your model for inference. In this example, we use the GPT-J 6B model from Hugging Face.

   ```
   predictor = estimator.deploy()
   ```

1. You can then run inference with the deployed model using the `predict` method.

   ```
   question = "What is Southern California often abbreviated as?"
   response = predictor.predict(question)
   print(response)
   ```

**Note**  
This example uses the foundation model GPT-J 6B, which is suitable for a wide range of text generation use cases including question answering, named entity recognition, summarization, and more. For more information about model use cases, see [Available foundation models](jumpstart-foundation-models-latest.md).

You can optionally specify model versions or instance types when creating your `JumpStartEstimator`. For more information about the `JumpStartEstimator` class and its parameters, see [JumpStartEstimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.jumpstart.estimator.JumpStartEstimator).

## Check default instance types
<a name="jumpstart-foundation-models-use-python-sdk-estimator-class-instance-types"></a>

You can optionally include specific model versions or instance types when fine-tuning a pre-trained model using the `JumpStartEstimator` class. All JumpStart models have a default instance type. Retrieve the default training instance type using the following code:

```
from sagemaker import instance_types

instance_type = instance_types.retrieve_default(
    model_id=model_id,
    model_version=model_version,
    scope="training")
print(instance_type)
```

You can see all supported instance types for a given JumpStart model with the `instance_types.retrieve()` method.

## Check default hyperparameters
<a name="jumpstart-foundation-models-use-python-sdk-estimator-class-hyperparameters"></a>

To check the default hyperparameters used for training, use the `retrieve_default()` method from the `hyperparameters` module.

```
from sagemaker import hyperparameters

my_hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
print(my_hyperparameters)

# Optionally override default hyperparameters for fine-tuning
my_hyperparameters["epoch"] = "3"
my_hyperparameters["per_device_train_batch_size"] = "4"

# Optionally validate hyperparameters for the model
hyperparameters.validate(model_id=model_id, model_version=model_version, hyperparameters=my_hyperparameters)
```

For more information on available hyperparameters, see [Commonly supported fine-tuning hyperparameters](jumpstart-foundation-models-fine-tuning.md#jumpstart-foundation-models-fine-tuning-hyperparameters).

## Check default metric definitions
<a name="jumpstart-foundation-models-use-python-sdk-estimator-class-metric-definitions"></a>

You can also check the default metric definitions using the `metric_definitions` module:

```
from sagemaker import metric_definitions

print(metric_definitions.retrieve_default(model_id=model_id, model_version=model_version))
```

# Deploy publicly available foundation models with the `JumpStartModel` class
<a name="jumpstart-foundation-models-use-python-sdk-model-class"></a>

You can deploy a built-in algorithm or pre-trained model to a SageMaker AI endpoint in just a few lines of code using the SageMaker Python SDK.

1. First, find the model ID for the model of your choice in the [Built-in Algorithms with pre-trained Model Table](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html).

1. Using the model ID, define your model as a JumpStart model.

   ```
   from sagemaker.jumpstart.model import JumpStartModel
   
   model_id = "huggingface-text2text-flan-t5-xl"
   my_model = JumpStartModel(model_id=model_id)
   ```

1. Use the `deploy` method to automatically deploy your model for inference. In this example, we use the FLAN-T5 XL model from Hugging Face.

   ```
   predictor = my_model.deploy()
   ```

1. You can then run inference with the deployed model using the `predict` method.

   ```
   question = "What is Southern California often abbreviated as?"
   response = predictor.predict(question)
   print(response)
   ```

**Note**  
This example uses the foundation model FLAN-T5 XL, which is suitable for a wide range of text generation use cases including question answering, summarization, chatbot creation, and more. For more information about model use cases, see [Available foundation models](jumpstart-foundation-models-latest.md).

For more information about the `JumpStartModel` class and its parameters, see [JumpStartModel](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.jumpstart.model.JumpStartModel).

## Check default instance types
<a name="jumpstart-foundation-models-use-python-sdk-model-class-instance-types"></a>

You can optionally include specific model versions or instance types when deploying a pre-trained model using the `JumpStartModel` class. All JumpStart models have a default instance type. Retrieve the default deployment instance type using the following code:

```
from sagemaker import instance_types

instance_type = instance_types.retrieve_default(
    model_id=model_id,
    model_version=model_version,
    scope="inference")
print(instance_type)
```

See all supported instance types for a given JumpStart model with the `instance_types.retrieve()` method.

## Use inference components to deploy multiple models to a shared endpoint
<a name="jumpstart-foundation-models-use-python-sdk-model-class-endpoint-types"></a>

An inference component is a SageMaker AI hosting object that you can use to deploy one or more models to an endpoint for increased flexibility and scalability. You must change the `endpoint_type` for your JumpStart model to be inference-component-based rather than the default model-based endpoint. 

```
from sagemaker.enums import EndpointType

predictor = my_model.deploy(
    endpoint_name="jumpstart-model-id-123456789012",
    endpoint_type=EndpointType.INFERENCE_COMPONENT_BASED,
)
```

For more information on creating endpoints with inference components and deploying SageMaker AI models, see [Shared resource utilization with multiple models](realtime-endpoints-deploy-models.md#deployed-shared-utilization).

## Check valid input and output inference formats
<a name="jumpstart-foundation-models-use-python-sdk-model-class-input-output"></a>

To check valid data input and output formats for inference, you can use the `retrieve_options()` method from the `serializers` and `deserializers` modules.

```
from sagemaker import serializers, deserializers

print(serializers.retrieve_options(model_id=model_id, model_version=model_version))
print(deserializers.retrieve_options(model_id=model_id, model_version=model_version))
```

## Check supported content and accept types
<a name="jumpstart-foundation-models-use-python-sdk-model-class-content-types"></a>

Similarly, you can use the `retrieve_options()` method from the `content_types` and `accept_types` modules to check the supported content and accept types for a model.

```
from sagemaker import content_types, accept_types

print(content_types.retrieve_options(model_id=model_id, model_version=model_version))
print(accept_types.retrieve_options(model_id=model_id, model_version=model_version))
```

For more information about utilities, see [Utility APIs](https://sagemaker.readthedocs.io/en/stable/api/utility/index.html).

# Deploy proprietary foundation models with the `ModelPackage` class
<a name="jumpstart-foundation-models-use-python-sdk-proprietary"></a>

Proprietary models must be deployed using the model package information after subscribing to the model in AWS Marketplace. For more information about SageMaker AI and AWS Marketplace, see [Buy and Sell Amazon SageMaker AI Algorithms and Models in AWS Marketplace](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-marketplace.html). To find AWS Marketplace links for the latest proprietary models, see [Getting started with Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/getting-started/?sagemaker-jumpstart-cards.sort-by=item.additionalFields.priority&sagemaker-jumpstart-cards.sort-order=asc&awsf.sagemaker-jumpstart-filter-product-type=product-type%23foundation-model&awsf.sagemaker-jumpstart-filter-text=*all&awsf.sagemaker-jumpstart-filter-vision=*all&awsf.sagemaker-jumpstart-filter-tabular=*all&awsf.sagemaker-jumpstart-filter-audio-tasks=*all&awsf.sagemaker-jumpstart-filter-multimodal=*all&awsf.sagemaker-jumpstart-filter-RL=*all&sagemaker-jumpstart-cards.q=proprietary&sagemaker-jumpstart-cards.q_operator=AND).

After subscribing to the model of your choice in AWS Marketplace, you can deploy the foundation model using the SageMaker Python SDK and the SDK associated with the model provider. For example, AI21 Labs, Cohere, and LightOn use the `"ai21[SM]"`, `cohere-sagemaker`, and `lightonsage` packages, respectively.

For example, to define a JumpStart model using Jurassic-2 Jumbo Instruct from AI21 Labs, use the following code: 

```
import sagemaker
from sagemaker import ModelPackage, get_execution_role
import ai21  # provider SDK from AI21 Labs, used for inference utilities

role = get_execution_role()
sagemaker_session = sagemaker.Session()
model_package_arn = "arn:aws:sagemaker:us-east-1:865070037744:model-package/j2-jumbo-instruct-v1-1-43-4e47c49e61743066b9d95efed6882f35"

my_model = ModelPackage(
    role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session
)
```

For step-by-step examples, find and run the notebook associated with the proprietary foundation model of your choice in SageMaker Studio Classic. See [Use foundation models in Amazon SageMaker Studio Classic](jumpstart-foundation-models-use-studio.md) for more information. For more information on the SageMaker Python SDK, see [https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.model.ModelPackage](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.model.ModelPackage).

# Discover foundation models in the SageMaker AI Console
<a name="jumpstart-foundation-models-use-console"></a>

You can explore JumpStart foundation models directly through the Amazon SageMaker AI Console.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Find **JumpStart** on the left navigation panel and choose **Foundation models**.

1. Browse models or search for a specific model. If you need guidance for model selection, see [Available foundation models](jumpstart-foundation-models-latest.md). Choose **View model** to view the model detail page for the foundation model of your choice.

1. If the model is a proprietary model, choose **Subscribe** in the upper right corner of the model detail page to subscribe to the model in AWS Marketplace. You should receive an email confirming your subscription to the model of your choice. For more information about SageMaker AI and AWS Marketplace, see [Buy and Sell Amazon SageMaker AI Algorithms and Models in AWS Marketplace](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-marketplace.html). Publicly available foundation models do not require a subscription.

1. To view an example notebook in GitHub, choose **View code** in the upper right corner of the model detail page.

1. To view and run an example notebook directly in Amazon SageMaker Studio Classic, choose **Open notebook in Studio** in the upper right corner of the model detail page.

# Model sources and license agreements
<a name="jumpstart-foundation-models-choose"></a>

Amazon SageMaker JumpStart provides access to hundreds of publicly available and proprietary foundation models from third-party sources and partners. You can explore the JumpStart foundation model selection directly in the SageMaker AI console, Studio, or Studio Classic. 

## Licenses and model sources
<a name="jumpstart-foundation-models-choose-source"></a>

Amazon SageMaker JumpStart provides access to both publicly available and proprietary foundation models. Foundation models are onboarded and maintained from third-party open source and proprietary providers. As such, they are released under different licenses as designated by the model source. Be sure to review the license for any foundation model that you use. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using the content. Some examples of common foundation model licenses include:
+ Alexa Teacher Model
+ Apache 2.0
+ BigScience Responsible AI License v1.0
+ CreativeML Open RAIL-M license

Similarly, for any proprietary foundation models, be sure to review and comply with any terms of use and usage guidelines from the model provider. If you have questions about license information for a specific proprietary model, reach out to the model provider directly. You can find model provider contact information in the **Support** tab of each model page in AWS Marketplace.

## End-user license agreements
<a name="jumpstart-foundation-models-choose-eula"></a>

Some JumpStart foundation models require explicit acceptance of an end-user license agreement (EULA) before use. 

### EULA acceptance in Amazon SageMaker Studio
<a name="jumpstart-foundation-models-choose-eula-studio"></a>

You may be prompted to accept an end-user license agreement before fine-tuning, deploying, or evaluating a JumpStart foundation model in Studio. To get started with JumpStart foundation models in Studio, see [Use foundation models in Studio](jumpstart-foundation-models-use-studio-updated.md). 

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

Some JumpStart foundation models require acceptance of an end-user license agreement before deployment. If this applies to the foundation model that you choose to use, Studio prompts you with a window containing the EULA content. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.

### EULA acceptance in Amazon SageMaker Studio Classic
<a name="jumpstart-foundation-models-choose-eula-studio-classic"></a>

You may be prompted to accept an end-user license agreement before deploying a JumpStart foundation model or opening a JumpStart foundation model notebook in Studio Classic. To get started with JumpStart foundation models in Studio Classic, see [Use foundation models in Amazon SageMaker Studio Classic](jumpstart-foundation-models-use-studio.md).

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Some JumpStart foundation models require acceptance of an end-user license agreement before deployment. If this applies to the foundation model that you choose to use, Studio Classic prompts you with a window titled **Review the End User License Agreement (EULA) and Acceptable Use Policy (AUP) below** after you choose either **Deploy** or **Open notebook**. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.

### EULA acceptance with the SageMaker Python SDK
<a name="jumpstart-foundation-models-choose-eula-python-sdk"></a>

The following sections show you how to explicitly declare EULA acceptance when deploying or fine-tuning a JumpStart model with the SageMaker Python SDK. For more information on getting started with JumpStart foundation models using the SageMaker Python SDK, see [Use foundation models with the SageMaker Python SDK](jumpstart-foundation-models-use-python-sdk.md).

Before you begin, make sure that you do the following:
+ Upgrade to the latest version of the model that you use. 
+ Install the latest version of the SageMaker Python SDK.

**Important**  
To use the following workflow, you must have [v2.198.0](https://github.com/aws/sagemaker-python-sdk/releases/tag/v2.198.0) or later of the SageMaker Python SDK installed.

#### EULA acceptance when deploying a JumpStart model
<a name="jumpstart-foundation-models-choose-eula-python-sdk-deploy"></a>

For models that require the acceptance of an end-user license agreement, you must explicitly declare EULA acceptance when deploying your JumpStart model.

```
from sagemaker.jumpstart.model import JumpStartModel
model_id = "meta-textgeneration-llama-2-13b"
my_model = JumpStartModel(model_id=model_id)

# Declare EULA acceptance when deploying your JumpStart model
predictor = my_model.deploy(accept_eula=True)
```

The `accept_eula` value is `None` by default and must be explicitly set to `True` in order to accept the end-user license agreement. For more information, see [JumpStartModel](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.jumpstart.model.JumpStartModel).

#### EULA acceptance when fine-tuning a JumpStart model
<a name="jumpstart-foundation-models-choose-eula-python-sdk-fine-tune"></a>

For fine-tuning models that require the acceptance of an end-user license agreement, you must explicitly declare EULA acceptance when running the `fit()` method for your JumpStart estimator. After fine-tuning a pre-trained model, the weights of the original model are changed. Therefore, when you deploy the fine-tuned model later, you do not need to accept a EULA.

**Note**  
The following example sets `accept_eula=False`. You should manually change the value to `True` in order to accept the EULA.

```
from sagemaker.jumpstart.estimator import JumpStartEstimator
model_id = "meta-textgeneration-llama-2-13b"

# Declare EULA acceptance when fitting your JumpStart estimator
estimator = JumpStartEstimator(model_id=model_id)
estimator.fit(
    {"train": training_dataset_s3_path, "validation": validation_dataset_s3_path},
    accept_eula=False,
)
```

The `accept_eula` value is `None` by default and must be explicitly set to `True` within the `fit()` method in order to accept the end-user license agreement. For more information, see [JumpStartEstimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.jumpstart.estimator.JumpStartEstimator).

#### EULA acceptance SageMaker Python SDK versions earlier than 2.198.0
<a name="jumpstart-foundation-models-choose-eula-python-sdk-previous-version"></a>

**Important**  
When using versions earlier than [2.198.0](https://github.com/aws/sagemaker-python-sdk/releases/tag/v2.198.0) of the SageMaker Python SDK, you must use the SageMaker `Predictor` class to accept a model EULA. 

After deploying a JumpStart foundation model programmatically using the SageMaker Python SDK, you can run inference against your deployed endpoint with the SageMaker [`Predictor`](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html) class. For models that require the acceptance of an end-user license agreement, you must explicitly declare EULA acceptance in your call to the `Predictor` class: 

```
predictor.predict(payload, custom_attributes="accept_eula=true")
```

The `accept_eula` value is `false` by default and must be explicitly redefined as `true` in order to accept the end-user license agreement. The predictor returns an error if you try to run inference while `accept_eula` is set to `false`. For more information on getting started with JumpStart foundation models using the SageMaker Python SDK, see [Use foundation models with the SageMaker Python SDK](jumpstart-foundation-models-use-python-sdk.md).

**Important**  
The `custom_attributes` parameter accepts key-value pairs in the format `"key1=value1;key2=value2"`. If you use the same key multiple times, the inference server uses the last value associated with the key. For example, if you pass `"accept_eula=false;accept_eula=true"` to the `custom_attributes` parameter, then the inference server associates the value `true` with the `accept_eula` key.
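The last-value-wins behavior described above can be sketched with a small helper. Note that `parse_custom_attributes` is a hypothetical name used for illustration, not part of the SageMaker SDK or the inference server:

```
def parse_custom_attributes(attrs: str) -> dict:
    """Parse "key1=value1;key2=value2" pairs; a repeated key keeps its last value.

    Illustrative helper, not part of any SDK.
    """
    parsed = {}
    for pair in attrs.split(";"):
        if "=" in pair:
            key, _, value = pair.partition("=")
            parsed[key.strip()] = value.strip()
    return parsed

# A repeated key resolves to its last value, as the inference server does
print(parse_custom_attributes("accept_eula=false;accept_eula=true"))  # {'accept_eula': 'true'}
```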

# Foundation model customization
<a name="jumpstart-foundation-models-customize"></a>

Foundation models are extremely powerful models that can address a wide array of tasks. However, to solve most tasks effectively, these models require some form of customization.

The recommended way to first customize a foundation model to a specific use case is through prompt engineering. Providing your foundation model with well-engineered, context-rich prompts can help achieve desired results without any fine-tuning or changing of model weights. For more information, see [Prompt engineering for foundation models](jumpstart-foundation-models-customize-prompt-engineering.md).

If prompt engineering alone is not enough to customize your foundation model to a specific task, you can fine-tune a foundation model on additional domain-specific data. For more information, see [Foundation models and hyperparameters for fine-tuning](jumpstart-foundation-models-fine-tuning.md). The fine-tuning process involves changing model weights.

If you want to customize your model with information from a knowledge library without any retraining, see [Retrieval Augmented Generation](jumpstart-foundation-models-customize-rag.md).

# Prompt engineering for foundation models
<a name="jumpstart-foundation-models-customize-prompt-engineering"></a>

Prompt engineering is the process of designing and refining the prompts or input stimuli for a language model to generate specific types of output. Prompt engineering involves selecting appropriate keywords, providing context, and shaping the input in a way that encourages the model to produce the desired response. It is a vital technique for actively shaping the behavior and output of foundation models.

Effective prompt engineering is crucial for directing model behavior and achieving desired responses. Through prompt engineering, you can control a model’s tone, style, and domain expertise without more involved customization measures like fine-tuning. We recommend dedicating time to prompt engineering before you consider fine-tuning a model on additional data. The goal is to provide sufficient context and guidance to the model so that it can generalize and perform well on unseen or limited data scenarios.

## Zero-shot learning
<a name="jumpstart-foundation-models-customize-prompt-engineering-zero-shot"></a>

Zero-shot learning involves training a model to generalize and make predictions on unseen classes or tasks. To perform prompt engineering in zero-shot learning environments, we recommend constructing prompts that explicitly provide information about the target task and the desired output format. For example, if you want to use a foundation model for zero-shot text classification on a set of classes that the model did not see during training, a well-engineered prompt could be: `"Classify the following text as either sports, politics, or entertainment: [input text]."` By explicitly specifying the target classes and the expected output format, you can guide the model to make accurate predictions even on unseen classes.
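As a sketch, such a prompt can be assembled programmatically. The helper below is illustrative only and not part of any SageMaker API:

```
def zero_shot_prompt(text: str, classes: list) -> str:
    """Build a zero-shot classification prompt that names the
    target classes and output format explicitly, as recommended above."""
    class_list = ", ".join(classes[:-1]) + ", or " + classes[-1]
    return f"Classify the following text as either {class_list}: {text}"

prompt = zero_shot_prompt(
    "The home team won in overtime.",
    ["sports", "politics", "entertainment"],
)
```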

## Few-shot learning
<a name="jumpstart-foundation-models-customize-prompt-engineering-few-shot"></a>

Few-shot learning involves training a model with a limited amount of data for new classes or tasks. Prompt engineering in few-shot learning environments focuses on designing prompts that effectively use the limited available training data. For example, if you use a foundation model for an image classification task and only have a few examples of a new image class, you can engineer a prompt that includes the available labeled examples with a placeholder for the target class. For example, the prompt could be: `"[image 1], [image 2], and [image 3] are examples of [target class]. Classify the following image as [target class]"`. By incorporating the limited labeled examples and explicitly specifying the target class, you can guide the model to generalize and make accurate predictions even with minimal training data.
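A few-shot prompt for a text task can be sketched the same way, interleaving the labeled examples before the unlabeled query. The helper below is illustrative only:

```
def few_shot_prompt(examples: list, query: str) -> str:
    """Build a few-shot prompt from (input, label) example pairs,
    ending with the unlabeled query for the model to complete."""
    lines = [f"Input: {text}\nLabel: {label}" for text, label in examples]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    [("Great product, works perfectly.", "positive"),
     ("Broke after one day.", "negative")],
    "Does exactly what it promises.",
)
```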

## Supported inference parameters
<a name="jumpstart-foundation-models-customize-prompt-engineering-inference-params"></a>

Changing inference parameters can also affect the responses to your prompts. In addition to adding specificity and context to your prompts, you can experiment with supported inference parameters. The following are examples of some commonly supported inference parameters:


| Inference Parameter | Description | 
| --- | --- | 
| `max_new_tokens` | The maximum output length of a foundation model response. Valid values: integer, range: Positive integer. | 
| `temperature` | Controls the randomness of the output. A higher temperature allows more low-probability words in the output sequence, while a lower temperature favors high-probability words. If `temperature=0`, the response is made up of only the highest probability words (greedy decoding). Valid values: float, range: non-negative float. | 
| `top_p` | In each step of text generation, the model samples from the smallest possible set of words with a cumulative probability of `top_p`. Valid values: float, range: 0.0 to 1.0. | 
| `return_full_text` | If `True`, then the input text is part of the generated output text. Valid values: boolean, default: False. | 
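These parameters are typically passed alongside the input text in the inference payload. The exact payload schema varies by model, so treat the structure below as an illustrative sketch rather than a definitive format:

```
# Hedged sketch of an inference payload using the parameters above.
# Consult your model's documentation for the parameters it supports.
payload = {
    "inputs": "Write a haiku about machine learning.",
    "parameters": {
        "max_new_tokens": 64,       # cap the response length
        "temperature": 0.7,         # moderate randomness
        "top_p": 0.9,               # nucleus sampling threshold
        "return_full_text": False,  # return only the generated text
    },
}
# With a deployed endpoint, this would be passed to predictor.predict(payload).
```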

For more information on foundation model inference, see [Deploy publicly available foundation models with the `JumpStartModel` class](jumpstart-foundation-models-use-python-sdk-model-class.md).

If prompt engineering is not sufficient to adapt your foundation model to specific business needs, domain-specific language, target tasks, or other requirements, you can consider fine-tuning your model on additional data or using Retrieval Augmented Generation (RAG) to augment your model architecture with enhanced context from archived knowledge sources. For more information, see [Foundation models and hyperparameters for fine-tuning](jumpstart-foundation-models-fine-tuning.md) or [Retrieval Augmented Generation](jumpstart-foundation-models-customize-rag.md).

# Foundation models and hyperparameters for fine-tuning
<a name="jumpstart-foundation-models-fine-tuning"></a>

Foundation models are computationally expensive and trained on a large, unlabeled corpus. Fine-tuning a pre-trained foundation model is an affordable way to take advantage of its broad capabilities while customizing the model on your own small corpus. Fine-tuning is a customization method that involves further training and does change the weights of your model. 

Fine-tuning might be useful to you if you need: 
+ to customize your model to specific business needs
+ your model to successfully work with domain-specific language, such as industry jargon, technical terms, or other specialized vocabulary
+ enhanced performance for specific tasks
+ accurate, relevant, and context-aware responses in applications
+ responses that are more factual, less toxic, and better-aligned to specific requirements

There are two main approaches that you can take for fine-tuning depending on your use case and chosen foundation model.

1. If you're interested in fine-tuning your model on domain-specific data, see [Fine-tune a large language model (LLM) using domain adaptation](jumpstart-foundation-models-fine-tuning-domain-adaptation.md).

1. If you're interested in instruction-based fine-tuning using prompt and response examples, see [Fine-tune a large language model (LLM) using prompt instructions](jumpstart-foundation-models-fine-tuning-instruction-based.md).

## Foundation models available for fine-tuning
<a name="jumpstart-foundation-models-fine-tuning-models"></a>

You can fine-tune any of the following JumpStart foundation models:
+ Bloom 3B
+ Bloom 7B1
+ BloomZ 3B FP16
+ BloomZ 7B1 FP16
+ Code Llama 13B
+ Code Llama 13B Python
+ Code Llama 34B
+ Code Llama 34B Python
+ Code Llama 70B
+ Code Llama 70B Python
+ Code Llama 7B
+ Code Llama 7B Python
+ CyberAgentLM2-7B-Chat (CALM2-7B-Chat)
+ Falcon 40B BF16
+ Falcon 40B Instruct BF16
+ Falcon 7B BF16
+ Falcon 7B Instruct BF16
+ Flan-T5 Base
+ Flan-T5 Large
+ Flan-T5 Small
+ Flan-T5 XL
+ Flan-T5 XXL
+ Gemma 2B
+ Gemma 2B Instruct
+ Gemma 7B
+ Gemma 7B Instruct
+ GPT-2 XL
+ GPT-J 6B
+ GPT-Neo 1.3B
+ GPT-Neo 125M
+ GPT-Neo 2.7B
+ LightGPT Instruct 6B
+ Llama 2 13B
+ Llama 2 13B Chat
+ Llama 2 13B Neuron
+ Llama 2 70B
+ Llama 2 70B Chat
+ Llama 2 7B
+ Llama 2 7B Chat
+ Llama 2 7B Neuron
+ Mistral 7B
+ Mixtral 8x7B
+ Mixtral 8x7B Instruct
+ RedPajama INCITE Base 3B V1
+ RedPajama INCITE Base 7B V1
+ RedPajama INCITE Chat 3B V1
+ RedPajama INCITE Chat 7B V1
+ RedPajama INCITE Instruct 3B V1
+ RedPajama INCITE Instruct 7B V1
+ Stable Diffusion 2.1

## Commonly supported fine-tuning hyperparameters
<a name="jumpstart-foundation-models-fine-tuning-hyperparameters"></a>

Different foundation models support different hyperparameters when fine-tuning. The following are commonly supported hyperparameters that can further customize your model during training:


| Hyperparameter | Description | 
| --- | --- | 
| `epoch` | The number of passes that the model takes through the fine-tuning dataset during training. Must be an integer greater than 1.  | 
| `learning_rate` |  The rate at which the model weights are updated after working through each batch of fine-tuning training examples. Must be a positive float greater than 0.  | 
| `instruction_tuned` |  Whether to instruction-train the model or not. Must be `'True'` or `'False'`.  | 
| `per_device_train_batch_size` |  The batch size per GPU core or CPU for training. Must be a positive integer. | 
| `per_device_eval_batch_size` |  The batch size per GPU core or CPU for evaluation. Must be a positive integer.  | 
| `max_train_samples` |  For debugging purposes or quicker training, truncate the number of training examples to this value. Value -1 means that the model uses all of the training samples. Must be a positive integer or -1.  | 
| `max_val_samples` |  For debugging purposes or quicker training, truncate the number of validation examples to this value. Value -1 means that the model uses all of the validation samples. Must be a positive integer or -1.  | 
| `max_input_length` |  Maximum total input sequence length after tokenization. Sequences longer than this will be truncated. If -1, `max_input_length` is set to the minimum of 1024 and the `model_max_length` defined by the tokenizer. If set to a positive value, `max_input_length` is set to the minimum of the provided value and the `model_max_length` defined by the tokenizer. Must be a positive integer or -1.  | 
| `validation_split_ratio` |  If there is no validation channel, ratio of train-validation split from the training data. Must be between 0 and 1.  | 
| `train_data_split_seed` |  If validation data is not present, this fixes the random splitting of the input training data to training and validation data used by the model. Must be an integer.  | 
| `preprocessing_num_workers` |  The number of processes to use for the pre-processing. If `None`, main process is used for pre-processing.  | 
| `lora_r` |  Low-rank adaptation (LoRA) r value, which is the rank of the low-rank update matrices. Must be a positive integer.  | 
| `lora_alpha` |  Low-rank adaptation (LoRA) alpha value, which acts as the scaling factor for weight updates. Generally 2 to 4 times the size of `lora_r`. Must be a positive integer.  | 
| `lora_dropout` |  Dropout value for low-rank adaptation (LoRA) layers. Must be a positive float between 0 and 1.  | 
| `int8_quantization` |  If `True`, model is loaded with 8 bit precision for training.  | 
| `enable_fsdp` |  If `True`, training uses Fully Sharded Data Parallelism.  | 

You can specify hyperparameter values when you fine-tune your model in Studio. For more information, see [Fine-tune a model in Studio](jumpstart-foundation-models-use-studio-updated-fine-tune.md). 

You can also override default hyperparameter values when fine-tuning your model using the SageMaker Python SDK. For more information, see [Fine-tune publicly available foundation models with the `JumpStartEstimator` class](jumpstart-foundation-models-use-python-sdk-estimator-class.md).

# Fine-tune a large language model (LLM) using domain adaptation
<a name="jumpstart-foundation-models-fine-tuning-domain-adaptation"></a>

Domain adaptation fine-tuning allows you to leverage pre-trained foundation models and adapt them to specific tasks using limited domain-specific data. If prompt engineering efforts do not provide enough customization, you can use domain adaptation fine-tuning to get your model working with domain-specific language, such as industry jargon, technical terms, or other specialized data. This fine-tuning process modifies the weights of the model. 

To fine-tune your model on a domain-specific dataset:

1. Prepare your training data. For instructions, see [Prepare and upload training data for domain adaptation fine-tuning](#jumpstart-foundation-models-fine-tuning-domain-adaptation-prepare-data).

1. Create your fine-tuning training job. For instructions, see [Create a training job for domain adaptation fine-tuning](#jumpstart-foundation-models-fine-tuning-domain-adaptation-train).

You can find end-to-end examples in [Example notebooks](#jumpstart-foundation-models-fine-tuning-domain-adaptation-examples).

Domain adaptation fine-tuning is available with the following foundation models:

**Note**  
Some JumpStart foundation models, such as Llama 2 7B, require acceptance of an end-user license agreement before fine-tuning and performing inference. For more information, see [End-user license agreements](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula).
+ Bloom 3B
+ Bloom 7B1
+ BloomZ 3B FP16
+ BloomZ 7B1 FP16
+ GPT-2 XL
+ GPT-J 6B
+ GPT-Neo 1.3B
+ GPT-Neo 125M
+ GPT-Neo 2.7B
+ Llama 2 13B
+ Llama 2 13B Chat
+ Llama 2 13B Neuron
+ Llama 2 70B
+ Llama 2 70B Chat
+ Llama 2 7B
+ Llama 2 7B Chat
+ Llama 2 7B Neuron

## Prepare and upload training data for domain adaptation fine-tuning
<a name="jumpstart-foundation-models-fine-tuning-domain-adaptation-prepare-data"></a>

Training data for domain adaptation fine-tuning can be provided in CSV, JSON, or TXT file format. All training data must be in a single file within a single folder.

The training data is taken from the **Text** column for CSV or JSON training data files. If no column is labeled **Text**, then the training data is taken from the first column for CSV or JSON training data files.

The following is an example body of a TXT file to be used for fine-tuning:

```
This report includes estimates, projections, statements relating to our
business plans, objectives, and expected operating results that are “forward-
looking statements” within the meaning of the Private Securities Litigation
Reform Act of 1995, Section 27A of the Securities Act of 1933, and Section 21E
of ....
```

### Split data for training and testing
<a name="jumpstart-foundation-models-fine-tuning-domain-adaptation-split-data"></a>

You can optionally provide another folder containing validation data. This folder should also include one CSV, JSON, or TXT file. If no validation dataset is provided, then a set amount of the training data is set aside for validation purposes. You can adjust the percentage of training data used for validation when you choose the hyperparameters for fine-tuning your model. 
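This split behavior corresponds to the `validation_split_ratio` and `train_data_split_seed` hyperparameters described in the fine-tuning hyperparameters table. It can be sketched locally as follows; this is an illustration, not the actual JumpStart training code:

```
import random

def split_train_validation(examples, validation_split_ratio, seed):
    """Illustrative seeded train/validation split, analogous to the
    validation_split_ratio and train_data_split_seed hyperparameters."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible
    split = int(len(shuffled) * (1 - validation_split_ratio))
    return shuffled[:split], shuffled[split:]

train, validation = split_train_validation(list(range(100)), 0.2, seed=42)
```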

### Upload fine-tuning data to Amazon S3
<a name="jumpstart-foundation-models-fine-tuning-domain-adaptation-upload-data"></a>

Upload your prepared data to Amazon Simple Storage Service (Amazon S3) to use when fine-tuning a JumpStart foundation model. You can use the following commands to upload your data:

```
from sagemaker.s3 import S3Uploader
import sagemaker

output_bucket = sagemaker.Session().default_bucket()
local_data_file = "train.txt"
train_data_location = f"s3://{output_bucket}/training_folder"
S3Uploader.upload(local_data_file, train_data_location)
print(f"Training data: {train_data_location}")
```

## Create a training job for domain adaptation fine-tuning
<a name="jumpstart-foundation-models-fine-tuning-domain-adaptation-train"></a>

After your data is uploaded to Amazon S3, you can fine-tune and deploy your JumpStart foundation model. To fine-tune your model in Studio, see [Fine-tune a model in Studio](jumpstart-foundation-models-use-studio-updated-fine-tune.md). To fine-tune your model using the SageMaker Python SDK, see [Fine-tune publicly available foundation models with the `JumpStartEstimator` class](jumpstart-foundation-models-use-python-sdk-estimator-class.md).

## Example notebooks
<a name="jumpstart-foundation-models-fine-tuning-domain-adaptation-examples"></a>

For more information on domain adaptation fine-tuning, see the following example notebooks:
+ [SageMaker JumpStart Foundation Models - Fine-tuning text generation GPT-J 6B model on domain specific dataset](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/domain-adaption-finetuning-gpt-j-6b.html)
+ [Fine-tune LLaMA 2 models on JumpStart](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/llama-2-finetuning.html)

# Fine-tune a large language model (LLM) using prompt instructions
<a name="jumpstart-foundation-models-fine-tuning-instruction-based"></a>

Instruction-based fine-tuning uses labeled examples to improve the performance of a pre-trained foundation model on a specific task. The labeled examples are formatted as prompt-response pairs and phrased as instructions. This fine-tuning process modifies the weights of the model. For more information on instruction-based fine-tuning, see the papers [Introducing FLAN: More generalizable Language Models with Instruction Fine-Tuning](https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html) and [Scaling Instruction-Finetuned Language Models](https://arxiv.org/abs/2210.11416).

Fine-tuned LAnguage Net (FLAN) models use instruction tuning to make models more amenable to solving general downstream NLP tasks. Amazon SageMaker JumpStart provides a number of foundation models in the FLAN model family. For example, FLAN-T5 models are instruction fine-tuned on a wide range of tasks to increase zero-shot performance for a variety of common use cases. With additional data and fine-tuning, instruction-based models can be further adapted to more specific tasks that weren’t considered during pre-training. 

To fine-tune an LLM on a specific task using instructions phrased as prompt-response pairs:

1. Prepare your instructions in JSON files. For more information about the required format for the prompt-response pair files and the structure of the data folder, see [Prepare and upload training data for instruction-based fine-tuning](#jumpstart-foundation-models-fine-tuning-instruction-based-prepare-data).

1. Create your fine-tuning training job. For instructions, see [Create a training job for instruction-based fine-tuning](#jumpstart-foundation-models-fine-tuning-instruction-based-train).

You can find end-to-end examples in [Example notebooks](#jumpstart-foundation-models-fine-tuning-instruction-based-examples).

Only a subset of JumpStart foundation models are compatible with instruction-based fine-tuning. Instruction-based fine-tuning is available with the following foundation models: 

**Note**  
Some JumpStart foundation models, such as Llama 2 7B, require acceptance of an end-user license agreement before fine-tuning and performing inference. For more information, see [End-user license agreements](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula).
+ Flan-T5 Base
+ Flan-T5 Large
+ Flan-T5 Small
+ Flan-T5 XL
+ Flan-T5 XXL
+ Llama 2 13B
+ Llama 2 13B Chat
+ Llama 2 13B Neuron
+ Llama 2 70B
+ Llama 2 70B Chat
+ Llama 2 7B
+ Llama 2 7B Chat
+ Llama 2 7B Neuron
+ Mistral 7B
+ RedPajama INCITE Base 3B V1
+ RedPajama INCITE Base 7B V1
+ RedPajama INCITE Chat 3B V1
+ RedPajama INCITE Chat 7B V1
+ RedPajama INCITE Instruct 3B V1
+ RedPajama INCITE Instruct 7B V1

## Prepare and upload training data for instruction-based fine-tuning
<a name="jumpstart-foundation-models-fine-tuning-instruction-based-prepare-data"></a>

Training data for instruction-based fine-tuning must be provided in JSON Lines text file format, where each line is a dictionary. All training data must be in a single folder. The folder can include multiple .jsonl files. 

The training folder can also include a template JSON file (`template.json`) that describes the input and output formats of your data. If no template file is provided, the following template file is used: 

```
{
  "prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{context}",
  "completion": "{response}"
}
```

According to the default `template.json` file, each .jsonl entry of the training data must include `instruction`, `context`, and `response` fields. 

If you provide a custom template JSON file, use the `"prompt"` and `"completion"` keys to define your own required fields. According to the following custom template JSON file, each .jsonl entry of the training data must include `question`, `context`, and `answer` fields:

```
{
  "prompt": "question: {question} context: {context}",
  "completion": "{answer}"
}
```
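The following sketch shows how a template combines with a training record. The record contents are hypothetical, and the actual substitution happens inside the training script:

```
template = {
    "prompt": "question: {question} context: {context}",
    "completion": "{answer}",
}

# A hypothetical .jsonl record with the fields this template requires.
record = {
    "question": "What instance types support fine-tuning?",
    "context": "See the model detail page for supported instances.",
    "answer": "The instance types listed on the model detail page.",
}

# str.format fills each {field} placeholder from the record.
prompt = template["prompt"].format(**record)
completion = template["completion"].format(**record)
```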

### Split data for training and testing
<a name="jumpstart-foundation-models-fine-tuning-instruction-based-split-data"></a>

You can optionally provide another folder containing validation data. This folder should also include one or more .jsonl files. If no validation dataset is provided, then a set amount of the training data is set aside for validation purposes. You can adjust the percentage of training data used for validation when you choose the hyperparameters for fine-tuning your model. 

### Upload fine-tuning data to Amazon S3
<a name="jumpstart-foundation-models-fine-tuning-instruction-based-upload-data"></a>

Upload your prepared data to Amazon Simple Storage Service (Amazon S3) to use when fine-tuning a JumpStart foundation model. You can use the following commands to upload your data:

```
from sagemaker.s3 import S3Uploader
import sagemaker

output_bucket = sagemaker.Session().default_bucket()
local_data_file = "train.jsonl"
train_data_location = f"s3://{output_bucket}/dolly_dataset"
S3Uploader.upload(local_data_file, train_data_location)
S3Uploader.upload("template.json", train_data_location)
print(f"Training data: {train_data_location}")
```

## Create a training job for instruction-based fine-tuning
<a name="jumpstart-foundation-models-fine-tuning-instruction-based-train"></a>

After your data is uploaded to Amazon S3, you can fine-tune and deploy your JumpStart foundation model. To fine-tune your model in Studio, see [Fine-tune a model in Studio](jumpstart-foundation-models-use-studio-updated-fine-tune.md). To fine-tune your model using the SageMaker Python SDK, see [Fine-tune publicly available foundation models with the `JumpStartEstimator` class](jumpstart-foundation-models-use-python-sdk-estimator-class.md).

## Example notebooks
<a name="jumpstart-foundation-models-fine-tuning-instruction-based-examples"></a>

For more information on instruction-based fine-tuning, see the following example notebooks:
+ [Fine-tune LLaMA 2 models on JumpStart](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/llama-2-finetuning.html)
+ [Introduction to SageMaker JumpStart - Text Generation with Mistral models](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/mistral-7b-instruction-domain-adaptation-finetuning.html)
+ [Introduction to SageMaker JumpStart - Text Generation with Falcon models](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/falcon-7b-instruction-domain-adaptation-finetuning.html)
+ [SageMaker JumpStart Foundation Models - HuggingFace Text2Text Instruction Fine-Tuning](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/instruction-fine-tuning-flan-t5.html)

# Retrieval Augmented Generation
<a name="jumpstart-foundation-models-customize-rag"></a>

Foundation models are usually trained offline, making the model agnostic to any data that is created after the model was trained. Additionally, foundation models are trained on very general domain corpora, making them less effective for domain-specific tasks. You can use Retrieval Augmented Generation (RAG) to retrieve data from outside a foundation model and augment your prompts by adding the relevant retrieved data in context. For more information about RAG model architectures, see [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401).

With RAG, the external data used to augment your prompts can come from multiple data sources, such as document repositories, databases, or APIs. The first step is to convert your documents and any user queries into a compatible format to perform relevancy search. To make the formats compatible, a document collection, or knowledge library, and user-submitted queries are converted to numerical representations using embedding language models. *Embedding* is the process by which text is given numerical representation in a vector space. RAG model architectures compare the embeddings of user queries within the vector space of the knowledge library. The original user prompt is then appended with relevant context from similar documents within the knowledge library. This augmented prompt is then sent to the foundation model. You can update knowledge libraries and their relevant embeddings asynchronously.

 ![\[A model architecture diagram of Retrieval Augmented Generation (RAG).\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-fm-rag.jpg) 

The retrieved document should be large enough to contain useful context to help augment the prompt, but small enough to fit into the maximum sequence length of the prompt. You can use task-specific JumpStart models, such as the General Text Embeddings (GTE) model from Hugging Face, to provide the embeddings for your prompts and knowledge library documents. After comparing the prompt and document embeddings to find the most relevant documents, construct a new prompt with the supplemental context. Then, pass the augmented prompt to a text generation model of your choosing. 
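The retrieval-and-augmentation flow can be sketched with a toy example. Here, a token-overlap similarity stands in for real embeddings from a model such as GTE, and the knowledge library contents are illustrative:

```
def embed(text: str) -> set:
    """Toy stand-in for an embedding model: a set of lowercase tokens.
    A real RAG system would use an embedding model such as GTE."""
    return set(text.lower().split())

def similarity(a: set, b: set) -> float:
    """Jaccard similarity as a stand-in for vector-space similarity."""
    return len(a & b) / len(a | b)

def retrieve(query: str, knowledge_library: list, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(knowledge_library,
                    key=lambda d: similarity(q, embed(d)), reverse=True)
    return ranked[:k]

def augment_prompt(query: str, knowledge_library: list) -> str:
    """Prepend retrieved context to the user prompt before sending it
    to the foundation model."""
    context = "\n".join(retrieve(query, knowledge_library))
    return f"Context:\n{context}\n\nQuestion: {query}"

library = [
    "JumpStart provides pretrained foundation models.",
    "Amazon S3 stores objects in buckets.",
]
augmented = augment_prompt("What does JumpStart provide?", library)
```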

## Example notebooks
<a name="jumpstart-foundation-models-customize-rag-examples"></a>

For more information on RAG foundation model solutions, see the following example notebooks: 
+ [Retrieval-Augmented Generation: Question Answering using LangChain and Cohere’s Generate and Embedding Models from SageMaker JumpStart](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_Cohere+langchain_jumpstart.html)
+ [Retrieval-Augmented Generation: Question Answering using LLama-2, Pinecone and Custom Dataset](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_pinecone_llama-2_jumpstart.html)
+ [Retrieval-Augmented Generation: Question Answering based on Custom Dataset with Open-sourced LangChain Library](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_langchain_jumpstart.html)
+ [Retrieval-Augmented Generation: Question Answering based on Custom Dataset](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_jumpstart_knn.html)
+ [Retrieval-Augmented Generation: Question Answering using Llama-2 and Text Embedding Models](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_text_embedding_llama-2_jumpstart.html)
+ [Amazon SageMaker JumpStart - Text Embedding and Sentence Similarity](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/text-embedding-sentence-similarity.html)

You can clone the [Amazon SageMaker AI examples repository](https://github.com/aws/amazon-sagemaker-examples/tree/main/introduction_to_amazon_algorithms/jumpstart-foundation-models) to run the available JumpStart foundation model examples in the Jupyter environment of your choice within Studio. For more information on applications that you can use to create and access Jupyter in SageMaker AI, see [Applications supported in Amazon SageMaker Studio](studio-updated-apps.md).

# Evaluate a text generation foundation model in Studio
<a name="jumpstart-foundation-models-evaluate"></a>

**Note**  
Foundation Model Evaluations (FMEval) is in preview release for Amazon SageMaker Clarify and is subject to change.

**Important**  
In order to use SageMaker Clarify Foundation Model Evaluations, you must upgrade to the new Studio experience. As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The foundation evaluation feature can only be used in the updated experience. For information about how to update Studio, see [Migration from Amazon SageMaker Studio Classic](studio-updated-migrate.md). For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

Amazon SageMaker JumpStart has integrations with SageMaker Clarify Foundation Model Evaluations (FMEval) in Studio. If a JumpStart model has built-in evaluation capabilities available, you can choose **Evaluate** in the upper right corner of the model detail page in the JumpStart Studio UI. For more information on navigating the JumpStart Studio UI, see [Open JumpStart in Studio](studio-jumpstart.md#jumpstart-open-studio).

Use Amazon SageMaker JumpStart to evaluate text-based foundation models with FMEval. You can use these model evaluations to compare model quality and responsibility metrics for one model, between two models, or between different versions of the same model, to help you quantify model risks. FMEval can evaluate text-based models that perform the following tasks:
+  **Open-ended generation** – The production of natural human responses to text that does not have a pre-defined structure.
+  **Text summarization** – The generation of a concise and condensed summary while retaining the meaning and key information contained in larger text.
+  **Question Answering** – The generation of an answer in natural language to a question.
+  **Classification** – The assignment of a class, such as `positive` versus `negative`, to a text passage based on its content.

You can use FMEval to automatically evaluate model responses based on specific benchmarks. You can also evaluate model responses against your own criteria by bringing your own prompt datasets. FMEval provides a user interface (UI) that guides you through the setup and configuration of an evaluation job. You can also use the FMEval library inside your own code.

Every evaluation requires quota for two instances:
+ Hosting instance – An instance that hosts and deploys an LLM.
+ Evaluation instance – An instance that is used to prompt and perform an evaluation of an LLM on the hosting instance.

If your LLM is already deployed, provide the endpoint, and SageMaker AI will use your **hosting instance** to host and deploy the LLM.

If you are evaluating a JumpStart model that is not yet deployed to your account, FMEval creates a temporary **hosting instance** for you in your account, and keeps it deployed only for the length of your evaluation. FMEval uses the default instance that JumpStart recommends for the chosen LLM as your hosting instance. You must have sufficient quota for this recommended instance.

Every evaluation also uses an evaluation instance to provide prompts to and score the responses from the LLM. You must also have sufficient quota and memory to run the evaluation algorithms. The quota and memory requirements of the evaluation instance are generally smaller than those required for a hosting instance. We recommend selecting the `ml.m5.2xlarge` instance. For more information about quota and memory, see [Resolve errors when creating a model evaluation job in Amazon SageMaker AI](clarify-foundation-model-evaluate-troubleshooting.md).
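Before starting an evaluation, you can check your account-level quotas for the hosting and evaluation instance types with the Service Quotas API. The following is a minimal sketch, assuming AWS credentials are configured and that SageMaker quota names mention the instance type (as they do in the Service Quotas console); the helper names are our own:

```python
def quotas_for_instance(quotas, instance_type):
    """Filter Service Quotas entries whose name mentions the instance type."""
    return [q for q in quotas if instance_type in q.get("QuotaName", "")]

def list_sagemaker_quotas(region="us-west-2"):
    import boto3  # lazy import; only needed for the live call
    client = boto3.client("service-quotas", region_name=region)
    quotas = []
    for page in client.get_paginator("list_service_quotas").paginate(ServiceCode="sagemaker"):
        quotas.extend(page["Quotas"])
    return quotas

# Live usage (requires AWS credentials):
#   for q in quotas_for_instance(list_sagemaker_quotas(), "ml.m5.2xlarge"):
#       print(q["QuotaName"], q["Value"])
```

A quota value of 0 for the recommended hosting instance means the evaluation's temporary endpoint can't be deployed until you request a quota increase.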

Automatic evaluations can be used to score LLMs across the following dimensions:
+ Accuracy – For text summarization, question answering, and text classification
+ Semantic robustness – For open-ended generation, text summarization, and text classification
+ Factual knowledge – For open-ended generation
+ Prompt stereotyping – For open-ended generation
+ Toxicity – For open-ended generation, text summarization, and question answering

You can also use human evaluations to manually evaluate model responses. The FMEval UI guides you through a workflow of selecting one or more models, provisioning resources, and writing instructions for and contacting your human workforce. After the human evaluation is complete, the results are displayed in FMEval.

You can access model evaluation through the JumpStart landing page in Studio by selecting a model to evaluate and then choosing **Evaluate**. Note that not all JumpStart models have evaluation capabilities available. For more information about how to configure, provision, and run FMEval, see [What are Foundation Model Evaluations?](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-foundation-model-evaluate.html).

# Example notebooks
<a name="jumpstart-foundation-models-example-notebooks"></a>

For step-by-step examples on how to use publicly available JumpStart foundation models with the SageMaker Python SDK, refer to the following notebooks on text generation, image generation, and model customization.

**Note**  
Proprietary and publicly available JumpStart foundation models have different SageMaker AI Python SDK deployment workflows. Discover proprietary foundation model example notebooks through Amazon SageMaker Studio Classic or the SageMaker AI console. For more information, see [JumpStart foundation model usage](jumpstart-foundation-models-use.md).

You can clone the [Amazon SageMaker AI examples repository](https://github.com/aws/amazon-sagemaker-examples/tree/main/introduction_to_amazon_algorithms/jumpstart-foundation-models) to run the available JumpStart foundation model examples in the Jupyter environment of your choice within Studio. For more information on applications that you can use to create and access Jupyter in SageMaker AI, see [Applications supported in Amazon SageMaker Studio](studio-updated-apps.md).

## Time series forecasting
<a name="jumpstart-foundation-models-example-notebooks-time-series"></a>

You can use the Chronos models to forecast time series data. They're based on language model architectures. Use the [Introduction to SageMaker JumpStart - Time Series Forecasting with Chronos](https://github.com/aws/amazon-sagemaker-examples/blob/default/generative_ai/sm-jumpstart_time_series_forecasting.ipynb) notebook to get started.

For information about the available Chronos models, see [Available foundation models](jumpstart-foundation-models-latest.md).

## Text generation
<a name="jumpstart-foundation-models-example-notebooks-text-generation"></a>

Explore text generation example notebooks, including guidance on general text generation workflows, multilingual text classification, real-time batch inference, few-shot learning, chatbot interactions, and more. 
+ [SageMaker JumpStart Foundation Models - HuggingFace Text2Text Generation with FLAN-T5 XL as an example](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/text2text-generation-flan-t5.html)
+ [SageMaker JumpStart Foundation Models - BloomZ: Multilingual Text Classification, Question and Answering, Code Generation, Paragraph rephrase, and More](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/text2text-generation-bloomz.html)
+ [SageMaker JumpStart Foundation Models - HuggingFace Text2Text Generation Batch Transform and Real-Time Batch Inference](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/text2text-generation-Batch-Transform.html)
+ [SageMaker JumpStart Foundation Models - GPT-J, GPT-Neo Few-shot learning](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-few-shot-learning.html)
+ [SageMaker JumpStart Foundation Models - Chatbots](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-chatbot.html)
+ [Introduction to SageMaker JumpStart - Text Generation with Mistral models](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/mistral-7b-instruction-domain-adaptation-finetuning.html)
+ [Introduction to SageMaker JumpStart - Text Generation with Falcon models](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/falcon-7b-instruction-domain-adaptation-finetuning.html)

## Image generation
<a name="jumpstart-foundation-models-example-notebooks-image-generation"></a>

Get started with text-to-image Stable Diffusion models, learn how to deploy an inpainting model, and experiment with a simple workflow to generate images of your dog. 
+ [Introduction to JumpStart - Text to Image](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart_text_to_image/Amazon_JumpStart_Text_To_Image.html)
+ [Introduction to JumpStart Image editing - Stable Diffusion Inpainting](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart_inpainting/Amazon_JumpStart_Inpainting.html)
+ [Generate fun images of your dog](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart_text_to_image/custom_dog_image_generator.html)

## Model customization
<a name="jumpstart-foundation-models-example-notebooks-model-customization"></a>

Sometimes your use case requires greater foundation model customization for specific tasks. For more information on model customization approaches, see [Foundation model customization](jumpstart-foundation-models-customize.md) or explore one of the following example notebooks. 
+ [SageMaker JumpStart Foundation Models - Fine-tuning text generation GPT-J 6B model on domain specific dataset](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/domain-adaption-finetuning-gpt-j-6b.html)
+ [SageMaker JumpStart Foundation Models - HuggingFace Text2Text Instruction Fine-Tuning](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/instruction-fine-tuning-flan-t5.html)
+ [Retrieval-Augmented Generation: Question Answering using LangChain and Cohere’s Generate and Embedding Models from SageMaker JumpStart](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_Cohere+langchain_jumpstart.html)
+ [Retrieval-Augmented Generation: Question Answering using LLama-2, Pinecone and Custom Dataset](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_pinecone_llama-2_jumpstart.html)
+ [Retrieval-Augmented Generation: Question Answering based on Custom Dataset with Open-sourced LangChain Library](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_langchain_jumpstart.html)
+ [Retrieval-Augmented Generation: Question Answering based on Custom Dataset](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_jumpstart_knn.html)
+ [Retrieval-Augmented Generation: Question Answering using Llama-2 and Text Embedding Models](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/question_answering_text_embedding_llama-2_jumpstart.html)
+ [Amazon SageMaker JumpStart - Text Embedding and Sentence Similarity](https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/question_answering_retrieval_augmented_generation/text-embedding-sentence-similarity.html)

# Private curated hubs for foundation model access control in JumpStart
<a name="jumpstart-curated-hubs"></a>

Curate pretrained JumpStart foundation models for your organization with private hubs. Use the latest publicly available and proprietary foundation models while enforcing governance guardrails and ensuring that your organization can only access approved models.

Use private model hubs to share models and notebooks, centralize model artifacts, improve model discoverability, and streamline model use within your organization. Administrators can create private hubs that include subsets of models tailored to different teams, use cases, or security requirements. Administrators can create a JumpStart private model hub using the SageMaker Python SDK. Users can then browse, train, and deploy the curated set of models using Amazon SageMaker Studio or the SageMaker Python SDK.

For more information on creating a private model hub, see [Admin guide for private model hubs in Amazon SageMaker JumpStart](jumpstart-curated-hubs-admin-guide.md).

For more information on sharing private model hubs across accounts, see [Cross-account sharing for private model hubs with AWS Resource Access Manager](jumpstart-curated-hubs-ram.md).

For more information on accessing a private model hub, see [User guide](jumpstart-curated-hubs-user-guide.md).

# Admin guide for private model hubs in Amazon SageMaker JumpStart
<a name="jumpstart-curated-hubs-admin-guide"></a>

Administrators can create, add models to, delete, and manage access to the private curated model hubs available to users within your organization. This page also includes information about the supported AWS Regions for curated private hubs, as well as the prerequisites needed to use them.

## Supported AWS Regions
<a name="jumpstart-curated-hubs-admin-guide-regions"></a>

Curated private hubs are currently generally available in the following AWS commercial Regions:
+ us-east-1
+ us-east-2
+ us-west-2
+ eu-west-1
+ eu-central-1
+ ap-northeast-1
+ ap-northeast-2
+ ap-south-1
+ ap-southeast-1
+ ap-southeast-2
+ il-central-1 (SDK only)

The default maximum number of hubs allowed in a single Region is 50.
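As a hedged sketch of how an administrator might check headroom against that default limit (the pagination shape of `list_hubs` is an assumption based on the SageMaker `ListHubs` API; the helper names are our own):

```python
def hubs_remaining(existing_hub_count, quota=50):
    """Hubs that can still be created under the default per-Region limit."""
    return max(quota - existing_hub_count, 0)

def count_hubs(region="us-west-2"):
    import boto3  # lazy import; only needed for the live call
    sm = boto3.client("sagemaker", region_name=region)
    hubs, token = [], None
    while True:
        resp = sm.list_hubs(NextToken=token) if token else sm.list_hubs()
        hubs.extend(resp["HubSummaries"])
        token = resp.get("NextToken")
        if not token:
            return len(hubs)

# Live usage (requires AWS credentials):
#   print(f"{hubs_remaining(count_hubs('us-west-2'))} hubs remaining in us-west-2")
```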

## Prerequisites
<a name="jumpstart-curated-hubs-admin-guide-prerequisites"></a>

To use a curated private hub in Studio, you must have the following prerequisites:
+ An AWS account with administrator access
+ An AWS Identity and Access Management (IAM) role with access to Amazon SageMaker Studio
+ An Amazon SageMaker AI domain with JumpStart enabled
+ If your users deploy or use proprietary models, their AWS accounts must have subscriptions to those models in AWS Marketplace.

For more information on getting started with Studio, see [Amazon SageMaker Studio](studio-updated.md).

# Create a private model hub
<a name="jumpstart-curated-hubs-admin-guide-create"></a>

Use the following steps to create a private hub to manage access control for pretrained JumpStart foundation models for your organization. You must install the SageMaker Python SDK and configure the necessary IAM permissions before creating a model hub.

**Create a private hub**

1. Install the SageMaker Python SDK and import the necessary Python packages.

   ```
   # Install the SageMaker Python SDK
   !pip3 install sagemaker --force-reinstall --quiet
   
   # Import the necessary Python packages
   import boto3
   from sagemaker import Session
   from sagemaker.jumpstart.hub.hub import Hub
   ```

1. Initialize a SageMaker AI Session.

   ```
   sm_client = boto3.client('sagemaker')
   session = Session(sagemaker_client=sm_client)
   session.get_caller_identity_arn()
   ```

1. Configure the details of your private hub such as the internal hub name, UI display name, and UI hub description.
**Note**  
If you do not specify an Amazon S3 bucket name when creating your hub, the SageMaker hub service creates a new bucket on your behalf. The new bucket has the following naming structure: `sagemaker-hubs-REGION-ACCOUNT_ID`.

   ```
   HUB_NAME="Example-Hub"
   HUB_DISPLAY_NAME="Example Hub UI Name"
   HUB_DESCRIPTION="A description of the example private curated hub."
   REGION="us-west-2"
   ```

1. Check that your **Admin** IAM role has the necessary Amazon S3 permissions to create a private hub. If your role does not have the necessary permissions, navigate to the **Roles** page in the IAM console. Choose the **Admin** role and then choose **Add permissions** in the **Permissions policies** pane to create an inline policy with the following permissions using the JSON editor:

------
#### [ JSON ]

****  

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Action": [
                   "s3:ListBucket",
                   "s3:GetObject",
                   "s3:GetObjectTagging"
               ],
               "Resource": [
                   "arn:aws:s3:::jumpstart-cache-prod-REGION",
                   "arn:aws:s3:::jumpstart-cache-prod-REGION/*"
               ],
               "Effect": "Allow"
           }
       ]
   }
   ```

------

1. Create a private model hub with `hub.create()`, using your configurations from **Step 3**.

   ```
   hub = Hub(hub_name=HUB_NAME, sagemaker_session=session)
   
   try:
       # Create the private hub
       hub.create(
           description=HUB_DESCRIPTION,
           display_name=HUB_DISPLAY_NAME
       )
       print(f"Successfully created Hub with name {HUB_NAME} in {REGION}")
   except Exception as e:
       # A ResourceInUse error means a hub with this internal name already exists
       if "ResourceInUse" in str(e):
           print(f"A hub with the name {HUB_NAME} already exists in your account.")
       else:
           raise e
   ```

1. Verify the configuration of your new private hub with the following `describe` command:

   ```
   hub.describe()
   ```
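If you don't pass a bucket name when creating the hub, the service-created bucket follows the naming structure from the note in step 3. A small hypothetical helper (the names are our own) can predict that bucket name for tagging or cleanup scripts:

```python
def default_hub_bucket_name(region, account_id):
    """Bucket name the hub service uses when none is specified:
    sagemaker-hubs-REGION-ACCOUNT_ID (see the note in step 3)."""
    return f"sagemaker-hubs-{region}-{account_id}"

def current_account_id():
    import boto3  # lazy import; only needed for the live call
    return boto3.client("sts").get_caller_identity()["Account"]

# Live usage (requires AWS credentials):
#   print(default_hub_bucket_name("us-west-2", current_account_id()))
```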

# Add models to a private hub
<a name="jumpstart-curated-hubs-admin-guide-add-models"></a>

After creating a private hub, you can then add allow-listed models. For the full list of available JumpStart models, see the [Built-in Algorithms with pre-trained Model Table](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html) in the SageMaker Python SDK reference.

1. You can filter the available models programmatically using the `hub.list_sagemaker_public_hub_models()` method. You can optionally filter by categories such as framework (`"framework == pytorch"`), tasks such as image classification (`"task == ic"`), and more. The `filter` parameter is optional. For more information about available filters, see [notebook_utils.py](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/jumpstart/notebook_utils.py) in the SageMaker Python SDK.

   ```
   filter_value = "framework == meta"
   response = hub.list_sagemaker_public_hub_models(filter=filter_value)
   models = response["hub_content_summaries"]
   while response["next_token"]:
       response = hub.list_sagemaker_public_hub_models(filter=filter_value, next_token=response["next_token"])
       models.extend(response["hub_content_summaries"])
   
   print(models)
   ```

1. You can then add the filtered models by specifying the model ARN in the `hub.create_model_reference()` method.

   ```
   for model in models:
       print(f"Adding {model.get('hub_content_name')} to Hub")
       hub.create_model_reference(model_arn=model.get("hub_content_arn"), model_name=model.get("hub_content_name"))
   ```

# Update resources in a private hub
<a name="jumpstart-curated-hubs-update"></a>

You can update resources in your private hub to make changes to their metadata. The resources that you can update include model references to Amazon SageMaker JumpStart models, custom models, notebooks, datasets, and JsonDocs.

When updating model, notebook, dataset, or JsonDoc resources, you can update the content description, display name, keywords, and support status. When updating model references to JumpStart models, you can only update the field specifying the minimum model version that you'd like to use.

Follow the section specific to the resource that you want to update.

## Update model or notebook resources
<a name="jumpstart-curated-hubs-update-model-notebook"></a>

To update a model or a notebook resource, use the [UpdateHubContent](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateHubContent.html) API.

The valid metadata fields that you can update with this API are the following:
+ `HubContentDescription` – The description of the resource.
+ `HubContentDisplayName` – The display name of the resource.
+ `HubContentMarkdown` – The description of the resource, in Markdown formatting.
+ `HubContentSearchKeywords` – The searchable keywords of the resource.
+ `SupportStatus` – The current status of the resource.

In your request, include a change for one or more of the preceding fields. If you attempt to update any other fields, such as the hub content type, you receive an error.

------
#### [ AWS SDK for Python (Boto3) ]

The following example shows how you can use the AWS SDK for Python (Boto3) to submit an [UpdateHubContent](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateHubContent.html) request.

**Note**  
The `HubContentVersion` that you specify in the request determines which version's metadata is updated. To find all of the available versions of your hub content, you can use the [ListHubContentVersions](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListHubContentVersions.html) API.

```
import boto3
sagemaker_client = boto3.Session(region_name=<AWS-region>).client("sagemaker")

sagemaker_client.update_hub_content(
    HubName=<hub-name>,
    HubContentName=<resource-content-name>,
    HubContentType=<"Model"|"Notebook">,
    HubContentVersion='1.0.0', # specify the correct version that you want to update
    HubContentDescription=<updated-description-string>
)
```

------
#### [ AWS CLI ]

The following example shows how you can use the AWS CLI to submit an [update-hub-content](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-hub-content.html) request.

```
aws sagemaker update-hub-content \
--hub-name <hub-name> \
--hub-content-name <resource-content-name> \
--hub-content-type <"Model"|"Notebook"> \
--hub-content-version "1.0.0" \
--hub-content-description <updated-description-string>
```

------

## Update model references
<a name="jumpstart-curated-hubs-update-model-reference"></a>

To update a model reference to a JumpStart model, use the [UpdateHubContentReference](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateHubContentReference.html) API.

You can only update the `MinVersion` field for model references.

------
#### [ AWS SDK for Python (Boto3) ]

The following example shows how you can use the AWS SDK for Python (Boto3) to submit an [UpdateHubContentReference](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateHubContentReference.html) request.

```
import boto3
sagemaker_client = boto3.Session(region_name=<AWS-region>).client("sagemaker")

update_response = sagemaker_client.update_hub_content_reference(
    HubName=<hub-name>,
    HubContentName=<model-reference-content-name>,
    HubContentType='ModelReference',
    MinVersion='1.0.0'
)
```

------
#### [ AWS CLI ]

The following example shows how you can use the AWS CLI to submit an [update-hub-content-reference](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sagemaker/update-hub-content-reference.html) request.

```
aws sagemaker update-hub-content-reference \
 --hub-name <hub-name> \
 --hub-content-name <model-reference-content-name> \
 --hub-content-type "ModelReference" \
 --min-version "1.0.0"
```

------

# Cross-account sharing for private model hubs with AWS Resource Access Manager
<a name="jumpstart-curated-hubs-ram"></a>

After creating a private model hub, you can share the hub with the necessary accounts using AWS Resource Access Manager (AWS RAM). For more information on creating a private hub, see [Create a private model hub](jumpstart-curated-hubs-admin-guide-create.md). The following page gives in-depth information about managed permissions related to private hubs within AWS RAM. For information about how to create a resource share within AWS RAM, see [Set up cross-account hub sharing](jumpstart-curated-hubs-ram-setup.md).

## Managed permissions for curated private hubs
<a name="jumpstart-curated-hubs-ram-permissions"></a>

The available access permissions are read, read and use, and full access. The permission name, description, and list of specific APIs available for each permission are as follows:
+ Read permission (`AWSRAMPermissionSageMakerHubRead`): The read privilege allows resource consumer accounts to read contents in the shared hubs and view details and metadata.
  + `DescribeHub`: Retrieves details about a hub and its configuration
  + `DescribeHubContent`: Retrieves details about a model available in a specific hub
  + `ListHubContent`: Lists all models available in a hub
  + `ListHubContentVersions`: Lists the versions of all models available in a hub
+ Read and use permission (`AWSRAMPermissionSageMakerHubReadAndUse`): The read and use privilege allows resource consumer accounts to read contents in the shared hubs and deploy available models for inference.
  + `DescribeHub`: Retrieves details about a hub and its configuration
  + `DescribeHubContent`: Retrieves details about a model available in a specific hub
  + `ListHubContent`: Lists all models available in a hub
  + `ListHubContentVersions`: Lists the versions of all models available in a hub
  + `DeployHubModel`: Allows access to deploy available open-weight hub models for inference
+ Full access permission (`AWSRAMPermissionSageMakerHubFullAccessPolicy`): The full access privilege allows resource consumer accounts to read contents in the shared hubs, add and remove hub content, and deploy available models for inference.
  + `DescribeHub`: Retrieves details about a hub and its configuration
  + `DescribeHubContent`: Retrieves details about a model available in a specific hub
  + `ListHubContent`: Lists all models available in a hub
  + `ListHubContentVersions`: Lists the versions of all models available in a hub
  + `ImportHubContent`: Imports hub content 
  + `DeleteHubContent`: Deletes hub content
  + `CreateHubContentReference`: Creates a hub content reference that shares a model from the SageMaker AI **Public models** hub to a private hub 
  + `DeleteHubContentReference`: Deletes a hub content reference that shares a model from the SageMaker AI **Public models** hub to a private hub
  + `DeployHubModel`: Allows access to deploy available open-weight hub models for inference

`DeployHubModel` permissions are not required for proprietary models.

# Set up cross-account hub sharing
<a name="jumpstart-curated-hubs-ram-setup"></a>

SageMaker uses [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) to help you securely share your private hubs across accounts. Set up cross-account hub sharing using the following instructions along with the [Sharing your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-create) instructions in the *AWS RAM User Guide*.

**Create a resource share**

1. Select **Create resource share** in the [AWS RAM console](https://console.aws.amazon.com/ram/home).

1. When specifying resource share details, choose the **SageMaker Hubs** resource type and select one or more private hubs that you want to share. When you share a hub with any other account, all of its contents are also shared implicitly.

1. Associate permissions with your resource share. For more information about managed permissions, see [Managed permissions for curated private hubs](jumpstart-curated-hubs-ram.md#jumpstart-curated-hubs-ram-permissions).

1. Use AWS account IDs to specify the accounts to which you want to grant access to your shared resources.

1. Review your resource share configuration and select **Create resource share**. It may take a few minutes for the resource share and principal associations to complete.

For more information, see [Sharing your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html) in the *AWS Resource Access Manager User Guide*.

After the resource share and principal associations are set, the specified AWS accounts receive an invitation to join the resource share. The AWS accounts must accept the invite to gain access to any shared resources.
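Consumer accounts can also accept the invitation programmatically with the AWS RAM Boto3 client instead of the console. A minimal sketch (the helper names are our own; this is generic AWS RAM usage, not specific to SageMaker hubs):

```python
def pending_invitations(invitations):
    """Keep only invitations that haven't been accepted, rejected, or expired."""
    return [i for i in invitations if i.get("status") == "PENDING"]

def accept_pending_shares(region="us-west-2"):
    import boto3  # lazy import; only needed for the live calls
    ram = boto3.client("ram", region_name=region)
    invites = ram.get_resource_share_invitations()["resourceShareInvitations"]
    for invite in pending_invitations(invites):
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invite["resourceShareInvitationArn"]
        )
        print(f"Accepted share: {invite['resourceShareName']}")

# Live usage in the consumer account (requires AWS credentials):
#   accept_pending_shares("us-west-2")
```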

For more information on accepting a resource share invite through AWS RAM, see [Using shared AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-shared.html) in the *AWS Resource Access Manager User Guide*.

# Delete models from a private hub
<a name="jumpstart-curated-hubs-admin-guide-delete-models"></a>

You can delete models from a private hub used by your organization by specifying the model name in the `hub.delete_model_reference()` method. This removes access to the model from the private hub.

```
hub.delete_model_reference("model-name")
```

# Restrict access to JumpStart gated models
<a name="jumpstart-curated-hubs-gated-model-access"></a>

Amazon SageMaker JumpStart provides access to both publicly available and proprietary foundation models. There are certain gated models in private Amazon S3 buckets that require you to have accepted the model's EULA (end user license agreement) in order to access them. For more information, see [EULA acceptance with the SageMaker Python SDK](jumpstart-foundation-models-choose.md#jumpstart-foundation-models-choose-eula-python-sdk).

The current default behavior is that if a user accepts a model's EULA, then the user can access the model and create [fine-tuning training jobs](jumpstart-foundation-models-use-python-sdk-estimator-class.md). However, if you're an administrator and would like to restrict fine-tuning access to these gated models, you can set a policy that denies permissions to use the `CreateTrainingJob` action whenever the request is to a gated model.

The following is an example AWS Identity and Access Management (IAM) policy that an administrator can add to a user's IAM role:

```
{
    "Effect": "Deny",
    "Action": "sagemaker:CreateTrainingJob",
    "Resource": "*",
    "Condition": {
        "Bool": {
            "sagemaker:DirectGatedModelAccess": "true"
        }
    }
}
```

If you want to grant users access to specific models without providing unrestricted access to the gated models, set up a curated hub and add the specific models to the hub. For more information, see [Private curated hubs for foundation model access control in JumpStart](jumpstart-curated-hubs.md).

# Remove access to the SageMaker **Public models** hub
<a name="jumpstart-curated-hubs-admin-guide-remove-public-hub"></a>

In addition to adding a private curated hub to JumpStart in Studio, you can also remove access to the SageMaker **Public models** hub for your users. The SageMaker **Public models** hub has access to all available JumpStart foundation models. 

If you remove access to the SageMaker **Public models** hub and a user has access to only one private hub, then the user is taken directly into that private hub when they choose **JumpStart** in the left navigation pane in Studio. If a user has access to multiple private hubs, then the user is taken to a **Hubs** menu page when they choose **JumpStart** in the left navigation pane in Studio.

Remove access to the SageMaker **Public models** hub for your users with the following inline policy: 

**Note**  
You can specify any additional Amazon S3 buckets that you want your hub to access in the policy below. Be sure to replace *`REGION`* with the Region of your hub.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Deny",
            "NotResource": [
                "arn:aws:s3:::jumpstart-cache-prod-us-east-1/*.ipynb",
                "arn:aws:s3:::jumpstart-cache-prod-us-east-1/*eula*",
                "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ]
        },
        {
            "Action": "sagemaker:*",
            "Effect": "Deny",
            "Resource": [
                "arn:aws:sagemaker:us-east-1:aws:hub/SageMakerPublicHub",
                "arn:aws:sagemaker:us-east-1:aws:hub-content/SageMakerPublicHub/*/*"
            ]
        }
    ]
}
```

------

# Delete a private hub
<a name="jumpstart-curated-hubs-admin-guide-delete"></a>

You can delete a private hub from your admin account. Before deleting a private hub, you must first remove any content in that hub. Delete hub contents and hubs with the following commands: 

```
# List the model references in the private hub
response = hub.list_models()
models = response["hub_content_summaries"]
while response["next_token"]:
    response = hub.list_models(next_token=response["next_token"])
    models.extend(response["hub_content_summaries"])

# Delete all model references in the hub
for model in models:
    hub.delete_model_reference(model_name=model.get('HubContentName'))

# Delete the private hub
hub.delete()
```

# Troubleshooting
<a name="jumpstart-curated-hubs-admin-guide-troubleshooting"></a>

The following sections give information about IAM permissions issues that might arise when creating a private model hub, as well as information about how to resolve those issues.

**`ValidationException` when calling the `CreateModel` operation: Could not access model data**

This exception arises when you do not have the appropriate Amazon S3 permissions configured for your **Admin** role. For more information on the Amazon S3 permissions needed to create a private hub, see **Step 3** in [Create a private model hub](jumpstart-curated-hubs-admin-guide-create.md).

**`Access Denied` or `Forbidden` when calling `create()`**

You are denied access when creating a private hub if you do not have the appropriate permissions to access the Amazon S3 bucket associated with the SageMaker **Public models** hub. For more information on the Amazon S3 permissions needed to create a private hub, see **Step 3** in [Create a private model hub](jumpstart-curated-hubs-admin-guide-create.md).

# User guide
<a name="jumpstart-curated-hubs-user-guide"></a>

The following topics cover accessing and using models in your Amazon SageMaker JumpStart curated model hubs. Learn how to access your curated hub models through the Amazon SageMaker Studio interface or programmatically with the SageMaker Python SDK. Additionally, learn how to fine-tune curated hub models to adapt them for your specific use cases and business needs.

**Topics**
+ [Access curated model hubs in Amazon SageMaker JumpStart](jumpstart-curated-hubs-access-hubs.md)
+ [Fine-tune curated hub models](jumpstart-curated-hubs-fine-tune.md)

# Access curated model hubs in Amazon SageMaker JumpStart
<a name="jumpstart-curated-hubs-access-hubs"></a>

You can access a private model hub either through Studio or through the SageMaker Python SDK.

## Access your private model hub in Studio
<a name="jumpstart-curated-hubs-user-guide-studio"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see [Amazon SageMaker Studio Classic](studio.md).

In Amazon SageMaker Studio, open the JumpStart landing page either through the **Home** page or the **Home** menu on the left-side panel. This opens the **SageMaker JumpStart** landing page where you can explore model hubs and search for models.
+ From the **Home** page, choose **JumpStart** in the **Prebuilt and automated solutions** pane. 
+ From the **Home** menu in the left panel, navigate to the **JumpStart** node.

For more information on getting started with Amazon SageMaker Studio, see [Amazon SageMaker Studio](studio-updated.md).

From the **SageMaker JumpStart** landing page in Studio, you can explore any private model hubs that include allow-listed models for your organization. If you only have access to one model hub, then the **SageMaker JumpStart** landing page takes you directly into that hub. If you have access to multiple hubs, you are taken to the **Hubs** page.

For more information on fine-tuning, deploying, and evaluating models that you have access to in Studio, see [Use foundation models in Studio](jumpstart-foundation-models-use-studio-updated.md).

## Access your private model hub using the SageMaker Python SDK
<a name="jumpstart-curated-hubs-user-guide-sdk"></a>

You can access your private model hub using the SageMaker Python SDK. Your access to read, use, or edit your curated hub is provided by your administrator.

**Note**  
If a hub is shared across accounts, then the `HUB_NAME` must be the hub ARN. If a hub is not shared across accounts, then the `HUB_NAME` can be the hub name.
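The note above can be expressed as a small helper that picks the correct identifier. This is an illustrative sketch; the ARN format is taken from the IAM policy examples elsewhere in this guide, so verify it against your own hub's ARN.

```
def hub_identifier(hub_name, shared_across_accounts=False,
                   region=None, owner_account_id=None):
    """Return the value to pass as HUB_NAME.

    For a hub shared across accounts, build the hub ARN; otherwise the
    plain hub name is sufficient. Illustrative helper, not an SDK API.
    """
    if not shared_across_accounts:
        return hub_name
    if not (region and owner_account_id):
        raise ValueError("region and owner_account_id are required for a shared hub")
    return f"arn:aws:sagemaker:{region}:{owner_account_id}:hub/{hub_name}"
```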

1. Install the SageMaker Python SDK and import the necessary Python packages.

   ```
   # Install the SageMaker Python SDK
   !pip3 install sagemaker --force-reinstall --quiet

   # Import the necessary Python packages
   import boto3
   from sagemaker import Session
   from sagemaker.jumpstart.hub.hub import Hub
   from sagemaker.jumpstart.model import JumpStartModel
   from sagemaker.jumpstart.estimator import JumpStartEstimator
   ```

1. Initialize a SageMaker AI session and connect to your private hub using the hub name and Region.

   ```
   # If a hub is shared across accounts, then the HUB_NAME must be the hub ARN
   HUB_NAME="Example-Hub-ARN"
   REGION="us-west-2"

   # Initialize a SageMaker session
   sm_client = boto3.client('sagemaker')
   sm_runtime_client = boto3.client('sagemaker-runtime')
   session = Session(sagemaker_client=sm_client,
                     sagemaker_runtime_client=sm_runtime_client)

   # Initialize the private hub
   hub = Hub(hub_name=HUB_NAME, sagemaker_session=session)
   ```

1. After connecting to a private hub, you can list all available models in that hub using the following commands:

   ```
   response = hub.list_models()
   models = response["hub_content_summaries"]
   while response.get("next_token"):
       response = hub.list_models(next_token=response["next_token"])
       models.extend(response["hub_content_summaries"])

   print(models)
   ```

1. You can get more information about a specific model using the model name with the following command:

   ```
   response = hub.describe_model(model_name="example-model")
   print(response)
   ```

For more information on fine-tuning and deploying models that you have access to using the SageMaker Python SDK, see [Use foundation models with the SageMaker Python SDK](jumpstart-foundation-models-use-python-sdk.md).

# Fine-tune curated hub models
<a name="jumpstart-curated-hubs-fine-tune"></a>

In your private curated model hub, you can run fine-tuning training jobs using your model references. Model references point to a publicly available JumpStart model in the SageMaker AI public hub, but you can fine-tune the model on your own data for your specific use case. After the fine-tuning job, you have access to the model weights that you can then use or deploy to an endpoint.

You can fine-tune curated hub models in just a few lines of code using the SageMaker Python SDK. For more general information on fine-tuning publicly available JumpStart models, see [Foundation models and hyperparameters for fine-tuning](jumpstart-foundation-models-fine-tuning.md).

## Prerequisites
<a name="jumpstart-curated-hubs-fine-tune-prereqs"></a>

In order to fine-tune a JumpStart model reference in your curated hub, do the following:

1. Make sure that your user's IAM role has the SageMaker AI `TrainHubModel` permission attached. For more information, see [ Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *AWS IAM User Guide*.

   You should attach a policy like the following example to your user's IAM role:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "VisualEditor0",
               "Effect": "Allow",
               "Action": "sagemaker:TrainHubModel",
               "Resource": "arn:aws:sagemaker:*:111122223333:hub/*"
           }
       ]
   }
   ```

------
**Note**  
If your curated hub is shared across accounts and the hub content is owned by another account, make sure that your `HubContent` (the model reference resource) has a resource-based IAM policy that also grants the `TrainHubModel` permission to the requesting account, as shown in the following example.  

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "AllowCrossAccountSageMakerAccess",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:root"
               },
               "Action": [
                   "sagemaker:TrainHubModel"
               ],
               "Resource": [
                   "arn:aws:sagemaker:*:111122223333:hub/*"
               ]
           }
       ]
   }
   ```

1. Have a private curated hub with a model reference to a JumpStart model that you want to fine-tune. For more information about creating a private hub, see [Create a private model hub](jumpstart-curated-hubs-admin-guide-create.md). To learn how to add publicly available JumpStart models to your private hub, see [Add models to a private hub](jumpstart-curated-hubs-admin-guide-add-models.md).
**Note**  
The JumpStart model you choose should be fine-tunable. You can verify whether a model is fine-tunable by checking the [ Built-in Algorithms with Pre-trained Models Table](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html).

1. Have a training dataset that you want to use for fine-tuning the model. The dataset should be in the appropriate training format for the model that you want to fine-tune.

## Fine-tune a curated hub model reference
<a name="jumpstart-curated-hubs-fine-tune-pysdk"></a>

The following procedure shows you how to fine-tune a model reference in your private curated hub using the SageMaker Python SDK.

1. Make sure that you have the latest version (at least `2.242.0`) of the SageMaker Python SDK installed. For more information, see [ Use Version 2.x of the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/v2.html).

   ```
   !pip install --upgrade sagemaker
   ```

1. Import the AWS SDK for Python (Boto3) and the modules you'll need from the SageMaker Python SDK.

   ```
   import boto3
   from sagemaker.jumpstart.estimator import JumpStartEstimator
   from sagemaker.session import Session
   ```

1. Initialize a Boto3 session, a SageMaker AI client, and a SageMaker Python SDK session.

   ```
   sagemaker_client = boto3.Session(region_name=<AWS-region>).client("sagemaker")
   sm_session = Session(sagemaker_client=sagemaker_client)
   ```

1. Create a `JumpStartEstimator` and provide the JumpStart model ID, the name of your hub that contains the model reference, and your SageMaker Python SDK session. For a list of model IDs, see the [ Built-in Algorithms with Pre-trained Models Table](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html).

   Optionally, you can specify the `instance_type` and `instance_count` fields when creating the estimator. If you don't, the training job uses the default instance type and count for the model you're using.

   You can also optionally specify the `output_path` to the Amazon S3 location where you want to store the fine-tuned model weights. If you don't specify the `output_path`, then SageMaker AI uses a default Amazon S3 bucket in your account for the Region, named with the following format: `sagemaker-<region>-<account-id>`.

   ```
   estimator = JumpStartEstimator(
       model_id="meta-textgeneration-llama-3-2-1b",
       hub_name=<your-hub-name>,
       sagemaker_session=sm_session, # If you don't specify an existing session, a default one is created for you
       # Optional: specify your desired instance type and count for the training job
       # instance_type = "ml.g5.2xlarge"
       # instance_count = 1
       # Optional: specify a custom S3 location to store the fine-tuned model artifacts
    # output_path = "s3://<output-path-for-model-artifacts>"
   )
   ```

1. Create a dictionary with the `training` key where you specify the location of your fine-tuning dataset. This example points to an Amazon S3 URI. If you have additional considerations, such as using local mode or multiple training data channels, see [ JumpStartEstimator.fit()](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.jumpstart.estimator.JumpStartEstimator.fit) in the SageMaker Python SDK documentation for more information.

   ```
   training_input = {
       "training": "s3://<your-fine-tuning-dataset>"
   }
   ```

1. Call the estimator's `fit()` method and pass in your training data and your EULA acceptance (if applicable).
**Note**  
The following example sets `accept_eula=False`. You must manually change the value to `True` to accept the EULA.

   ```
   estimator.fit(inputs=training_input, accept_eula=False)
   ```

Your fine-tuning job should now begin.

You can check on your fine-tuning job by viewing your training jobs, either in the SageMaker AI console or by using the [ListTrainingJobs](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ListTrainingJobs.html) API.

You can access your fine-tuned model artifacts at the Amazon S3 `output_path` that was specified in the `JumpStartEstimator` object (either the default SageMaker AI Amazon S3 bucket for the region, or a custom Amazon S3 path you specified, if applicable).
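If you didn't set a custom `output_path`, the default bucket name can be computed from the format noted above. The helper below is illustrative, not part of the SageMaker Python SDK.

```
def default_sagemaker_bucket(region, account_id):
    """Name of the default SageMaker AI bucket, using the
    sagemaker-<region>-<account-id> format described above."""
    return f"sagemaker-{region}-{account_id}"

# Example with the documentation's placeholder account ID
bucket = default_sagemaker_bucket("us-west-2", "111122223333")
```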

# Amazon SageMaker JumpStart in Studio Classic
<a name="jumpstart-studio-classic"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

The following JumpStart features are only available in Amazon SageMaker Studio Classic.
+ [Task-Specific Models](jumpstart-models.md)
+ [Shared Models and Notebooks](jumpstart-content-sharing.md)
+ [End-to-end JumpStart solution templates](jumpstart-solutions.md)
+ [Amazon SageMaker JumpStart Industry: Financial](studio-jumpstart-industry.md)

# Task-Specific Models
<a name="jumpstart-models"></a>

JumpStart supports task-specific models across fifteen of the most popular problem types. Thirteen of the supported problem types are vision- or NLP-related, and eight of them support incremental training and fine-tuning. For more information about incremental training and hyperparameter tuning, see [SageMaker AI Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html). JumpStart also supports four popular algorithms for tabular data modeling.

You can search and browse models from the JumpStart landing page in Studio or Studio Classic. When you select a model, the model detail page provides information about the model, and you can train and deploy your model in a few steps. The description section describes what you can do with the model, the expected types of inputs and outputs, and the data type needed for fine-tuning your model. 

You can also programmatically utilize models with the [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart). For a list of all available models, see the [JumpStart Available Model Table](https://sagemaker.readthedocs.io/en/v2.132.0/doc_utils/pretrainedmodels.html).

The following table summarizes the supported problem types and links to their example Jupyter notebooks.


| Problem types  | Supports inference with pre-trained models  | Trainable on a custom dataset  | Supported frameworks  | Example Notebooks  | 
| --- | --- | --- | --- | --- | 
| Image classification  | Yes  | Yes  |  PyTorch, TensorFlow  |  [Introduction to JumpStart - Image Classification](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_image_classification/Amazon_JumpStart_Image_Classification.ipynb)  | 
| Object detection  | Yes  | Yes  | PyTorch, TensorFlow, MXNet |  [Introduction to JumpStart - Object Detection](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_object_detection/Amazon_JumpStart_Object_Detection.ipynb)  | 
| Semantic segmentation  | Yes  | Yes  | MXNet  |  [Introduction to JumpStart - Semantic Segmentation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_semantic_segmentation/Amazon_JumpStart_Semantic_Segmentation.ipynb)  | 
| Instance segmentation  | Yes  | Yes  | MXNet  |  [Introduction to JumpStart - Instance Segmentation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_instance_segmentation/Amazon_JumpStart_Instance_Segmentation.ipynb)  | 
| Image embedding  | Yes  | No  | TensorFlow, MXNet |  [Introduction to JumpStart - Image Embedding](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_image_embedding/Amazon_JumpStart_Image_Embedding.ipynb)  | 
| Text classification  | Yes  | Yes  | TensorFlow |  [Introduction to JumpStart - Text Classification](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_classification/Amazon_JumpStart_Text_Classification.ipynb)  | 
| Sentence pair classification  | Yes  | Yes  | TensorFlow, Hugging Face |  [Introduction to JumpStart - Sentence Pair Classification](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_sentence_pair_classification/Amazon_JumpStart_Sentence_Pair_Classification.ipynb)  | 
| Question answering  | Yes  | Yes  | PyTorch, Hugging Face |  [Introduction to JumpStart – Question Answering](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_question_answering/Amazon_JumpStart_Question_Answering.ipynb)  | 
| Named entity recognition  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Named Entity Recognition](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_named_entity_recognition/Amazon_JumpStart_Named_Entity_Recognition.ipynb)  | 
| Text summarization  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Text Summarization](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_summarization/Amazon_JumpStart_Text_Summarization.ipynb)  | 
| Text generation  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Text Generation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_generation/Amazon_JumpStart_Text_Generation.ipynb)  | 
| Machine translation  | Yes  | No  | Hugging Face  |  [Introduction to JumpStart - Machine Translation](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_machine_translation/Amazon_JumpStart_Machine_Translation.ipynb)  | 
| Text embedding  | Yes  | No  | TensorFlow, MXNet |  [Introduction to JumpStart - Text Embedding](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_embedding/Amazon_JumpStart_Text_Embedding.ipynb)  | 
| Tabular classification  | Yes  | Yes  | LightGBM, CatBoost, XGBoost, AutoGluon-Tabular, TabTransformer, Linear Learner |  [Introduction to JumpStart - Tabular Classification - LightGBM, CatBoost](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/lightgbm_catboost_tabular/Amazon_Tabular_Classification_LightGBM_CatBoost.ipynb) [Introduction to JumpStart - Tabular Classification - XGBoost, Linear Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/xgboost_linear_learner_tabular/Amazon_Tabular_Classification_XGBoost_LinearLearner.ipynb) [Introduction to JumpStart - Tabular Classification - AutoGluon Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/autogluon_tabular/Amazon_Tabular_Classification_AutoGluon.ipynb) [Introduction to JumpStart - Tabular Classification - TabTransformer Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/tabtransformer_tabular/Amazon_Tabular_Classification_TabTransformer.ipynb)  | 
| Tabular regression  | Yes  | Yes  | LightGBM, CatBoost, XGBoost, AutoGluon-Tabular, TabTransformer, Linear Learner |  [Introduction to JumpStart - Tabular Regression - LightGBM, CatBoost](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/lightgbm_catboost_tabular/Amazon_Tabular_Regression_LightGBM_CatBoost.ipynb) [Introduction to JumpStart – Tabular Regression - XGBoost, Linear Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/xgboost_linear_learner_tabular/Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb) [Introduction to JumpStart – Tabular Regression - AutoGluon Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/autogluon_tabular/Amazon_Tabular_Regression_AutoGluon.ipynb) [Introduction to JumpStart – Tabular Regression - TabTransformer Learner](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/tabtransformer_tabular/Amazon_Tabular_Regression_TabTransformer.ipynb)  | 

# Deploy a Model
<a name="jumpstart-deploy"></a>

When you deploy a model from JumpStart, SageMaker AI hosts the model and deploys an endpoint that you can use for inference. JumpStart also provides an example notebook that you can use to access the model after it's deployed. 

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
For more information on JumpStart model deployment in Studio, see [Deploy a model in Studio](jumpstart-foundation-models-use-studio-updated-deploy.md).

## Model deployment configuration
<a name="jumpstart-config"></a>

After you choose a model, the model's tab opens. In the **Deploy Model** pane, choose **Deployment Configuration** to configure your model deployment. 

 ![\[The Deploy Model pane.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy.png) 

The default instance type for deploying a model depends on the model. The instance type is the hardware that the model runs on. In the following example, the `ml.p2.xlarge` instance is the default for this particular BERT model. 

You can also change the endpoint name, add key-value resource tags, activate or deactivate the `jumpstart-` prefix for any JumpStart resources related to the model, and specify an Amazon S3 bucket for storing model artifacts used by your SageMaker AI endpoint.

 ![\[JumpStart Deploy Model pane with Deployment Configuration open to select its settings.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-config.png) 

Choose **Security Settings** to specify the AWS Identity and Access Management (IAM) role, Amazon Virtual Private Cloud (Amazon VPC), and encryption keys for the model.

 ![\[JumpStart Deploy Model pane with Security Settings open to select its settings.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security.png) 

## Model deployment security
<a name="jumpstart-config-security"></a>

When you deploy a model with JumpStart, you can specify an IAM role, Amazon VPC, and encryption keys for the model. If you don't specify values for these entries, the default IAM role is your Studio Classic runtime role, default encryption is used, and no Amazon VPC is used.

### IAM role
<a name="jumpstart-config-security-iam"></a>

You can select an IAM role that is passed as part of training jobs and hosting jobs. SageMaker AI uses this role to access training data and model artifacts. If you don't select an IAM role, SageMaker AI deploys the model using your Studio Classic runtime role. For more information about IAM roles, see [AWS Identity and Access Management for Amazon SageMaker AI](security-iam.md).

The role that you pass must have access to the resources that the model needs, and must include all of the following.
+ For training jobs: [CreateTrainingJob API: Execution Role Permissions](https://docs.aws.amazon.com//sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createtrainingjob-perms).
+ For hosting jobs: [CreateModel API: Execution Role Permissions](https://docs.aws.amazon.com//sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createmodel-perms).

**Note**  
You can scope down the Amazon S3 permissions granted to the role by using the ARN of your Amazon Simple Storage Service (Amazon S3) bucket and the ARN of the JumpStart Amazon S3 bucket.  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::jumpstart-cache-prod-<region>/*",
        "arn:aws:s3:::jumpstart-cache-prod-<region>",
        "arn:aws:s3:::<bucket>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```

**Find IAM role**

If you select this option, you must select an existing IAM role from the dropdown list.

 ![\[JumpStart Security Settings IAM section with Find IAM role selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-findiam.png) 

**Input IAM role**

If you select this option, you must manually enter the ARN for an existing IAM role. If your Studio Classic runtime role or Amazon VPC blocks the `iam:list*` call, you must use this option to use an existing IAM role.

 ![\[JumpStart Security Settings IAM section with Input IAM role selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-inputiam.png) 

### Amazon VPC
<a name="jumpstart-config-security-vpc"></a>

All JumpStart models run in network isolation mode. After the model container is created, it can't make outbound network calls. You can select an Amazon VPC that is passed as part of training jobs and hosting jobs. SageMaker AI uses this Amazon VPC to push and pull resources from your Amazon S3 bucket. This Amazon VPC is different from the Amazon VPC that limits access to the public internet from your Studio Classic instance. For more information about the Studio Classic Amazon VPC, see [Connect Studio notebooks in a VPC to external resources](studio-notebooks-and-internet-access.md).

The Amazon VPC that you pass does not need access to the public internet, but it does need access to Amazon S3. The Amazon VPC endpoint for Amazon S3 must allow access to at least the following resources that the model needs.

```
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListMultipartUploadParts",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::jumpstart-cache-prod-<region>/*",
    "arn:aws:s3:::jumpstart-cache-prod-<region>",
    "arn:aws:s3:::<bucket>/*"
  ]
}
```
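Because the statement above varies only by Region and bucket name, you can render it with a small helper. This is an illustrative sketch, not an AWS-provided function; substitute your own bucket name.

```
def s3_endpoint_policy_statement(region, bucket):
    """Render the minimal Amazon S3 access statement shown above
    for an Amazon VPC endpoint. Illustrative helper, not an AWS API."""
    return {
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:ListMultipartUploadParts",
            "s3:ListBucket",
        ],
        "Resource": [
            f"arn:aws:s3:::jumpstart-cache-prod-{region}/*",
            f"arn:aws:s3:::jumpstart-cache-prod-{region}",
            f"arn:aws:s3:::{bucket}/*",
        ],
    }
```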

If you do not select an Amazon VPC, no Amazon VPC is used.

**Find VPC**

If you select this option, you must select an existing Amazon VPC from the dropdown list. After you select an Amazon VPC, you must select a subnet and security group for your Amazon VPC. For more information about subnets and security groups, see [Overview of VPCs and subnets](https://docs.aws.amazon.com//vpc/latest/userguide/VPC_Subnets.html).

 ![\[JumpStart Security Settings VPC section with Find VPC selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-findvpc.png) 

**Input VPC**

If you select this option, you must manually select the subnet and security group that compose your Amazon VPC. If your Studio Classic runtime role or Amazon VPC blocks the `ec2:list*` call, you must use this option to select the subnet and security group.

 ![\[JumpStart Security Settings VPC section with Input VPC selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-inputvpc.png) 

### Encryption keys
<a name="jumpstart-config-security-encryption"></a>

You can select an AWS KMS key that is passed as part of training jobs and hosting jobs. SageMaker AI uses this key to encrypt the Amazon EBS volume for the container, and the repackaged model in Amazon S3 for hosting jobs and the output for training jobs. For more information about AWS KMS keys, see [AWS KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#kms_keys).

The key that you pass must trust the IAM role that you pass. If you do not specify an IAM role, the AWS KMS key must trust your Studio Classic runtime role.

If you do not select an AWS KMS key, SageMaker AI provides default encryption for the data in the Amazon EBS volume and the Amazon S3 artifacts.

**Find encryption keys**

If you select this option, you must select existing AWS KMS keys from the dropdown list.

 ![\[JumpStart Security Settings encryption section with Find encryption keys selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-findencryption.png) 

**Input encryption keys**

If you select this option, you must manually enter the AWS KMS keys. If your Studio Classic runtime role or Amazon VPC blocks the `kms:list*` call, you must use this option to select existing AWS KMS keys.

 ![\[JumpStart Security Settings encryption section with Input encryption keys selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-deploy-security-inputencryption.png) 

## Configure default values for JumpStart models
<a name="jumpstart-config-defaults"></a>

You can configure default values for parameters such as IAM roles, VPCs, and KMS keys to pre-populate for JumpStart model deployment and training. After configuring default values, the Studio Classic UI automatically provides your specified security settings and tags to JumpStart models to simplify deployment and training workflows. Administrators and end-users can initialize default values specified in a configuration file in YAML format.

By default, the SageMaker Python SDK uses two configuration files: one for the administrator and one for the user. Using the administrator configuration file, administrators can define a set of default values. End-users can override values set in the administrator configuration file and set additional default values using the end-user configuration file. For more information, see [Default configuration file location](https://sagemaker.readthedocs.io/en/stable/overview.html#default-configuration-file-location). 

The following code sample lists the default locations of the configuration files when using the SageMaker Python SDK in Amazon SageMaker Studio Classic.

```
# Location of the admin config file
/etc/xdg/sagemaker/config.yaml

# Location of the user config file
/root/.config/sagemaker/config.yaml
```

Values specified in the user configuration file override values set in the administrator configuration file. The configuration file is unique to each user profile within an Amazon SageMaker AI domain, because each Studio Classic application is directly associated with a single user profile. For more information, see [Domain user profiles](domain-user-profile.md).
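The override behavior can be illustrated with a minimal sketch. This is not the SDK's actual merge implementation; it only demonstrates that, for the same key path, a user value takes precedence over the administrator value.

```
def merge_config(admin, user):
    """Recursively merge two config dicts; user values win on conflict.

    Illustrative sketch of the precedence described above, not the
    SageMaker Python SDK's actual merge logic.
    """
    merged = dict(admin)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

admin_cfg = {"SageMaker": {"Model": {
    "EnableNetworkIsolation": True,
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/AdminRole"}}}
user_cfg = {"SageMaker": {"Model": {
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/UserRole"}}}

# The user's role overrides the admin's; other admin defaults survive
effective = merge_config(admin_cfg, user_cfg)
```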

Administrators can optionally set configuration defaults for JumpStart model training and deployment through `JupyterServer` lifecycle configurations. For more information, see [Create and Associate a Lifecycle Configuration with Amazon SageMaker Studio Classic](studio-lcc-create.md).

### Default value configuration YAML file
<a name="jumpstart-config-defaults-yaml"></a>

Your configuration file should adhere to the SageMaker Python SDK [configuration file structure](https://sagemaker.readthedocs.io/en/stable/overview.html#configuration-file-structure). Note that specific fields in the `TrainingJob`, `Model`, and `EndpointConfig` configurations apply to JumpStart model training and deployment default values.

```
SchemaVersion: '1.0'
SageMaker:
  TrainingJob:
    OutputDataConfig:
      KmsKeyId: example-key-id
    ResourceConfig:
      # Training configuration - Volume encryption key
      VolumeKmsKeyId: example-key-id
    # Training configuration form - IAM role
    RoleArn: arn:aws:iam::123456789012:role/SageMakerExecutionRole
    VpcConfig:
      # Training configuration - Security groups
      SecurityGroupIds:
      - sg-1
      - sg-2
      # Training configuration - Subnets
      Subnets:
      - subnet-1
      - subnet-2
    # Training configuration - Custom resource tags
    Tags:
    - Key: Example-key
      Value: Example-value
  Model:
    EnableNetworkIsolation: true
    # Deployment configuration - IAM role
    ExecutionRoleArn: arn:aws:iam::123456789012:role/SageMakerExecutionRole
    VpcConfig:
      # Deployment configuration - Security groups
      SecurityGroupIds:
      - sg-1
      - sg-2
      # Deployment configuration - Subnets
      Subnets:
      - subnet-1
      - subnet-2
  EndpointConfig:
    AsyncInferenceConfig:
      OutputConfig:
        KmsKeyId: example-key-id
    DataCaptureConfig:
      # Deployment configuration - Volume encryption key
      KmsKeyId: example-key-id
    KmsKeyId: example-key-id
    # Deployment configuration - Custom resource tags
    Tags:
    - Key: Example-key
      Value: Example-value
```

# Fine-Tune a Model
<a name="jumpstart-fine-tune"></a>

Fine-tuning trains a pretrained model on a new dataset without training from scratch. This process, also known as transfer learning, can produce accurate models with smaller datasets and less training time. You can fine-tune a model if its card shows a **fine-tunable** attribute set to **Yes**. 

 ![\[JumpStart fine-tunable Image Classification - TensorFlow model\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-finetune-model.png) 

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
For more information on JumpStart model fine-tuning in Studio, see [Fine-tune a model in Studio](jumpstart-foundation-models-use-studio-updated-fine-tune.md).

## Fine-Tuning data source
<a name="jumpstart-fine-tune-data"></a>

 When you fine-tune a model, you can use the default dataset or choose your own data, which is located in an Amazon S3 bucket. 

To browse the buckets available to you, choose **Find S3 bucket**. These buckets are limited by the permissions used to set up your Studio Classic account. You can also specify an Amazon S3 URI by choosing **Enter Amazon S3 bucket location**. 

 ![\[JumpStart data source settings with default dataset selected.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-dataset.png) 

**Tip**  
 To find out how to format the data in your bucket, choose **Learn more**. The description section for the model has detailed information about inputs and outputs.  

 For text models: 
+ The bucket must have a `data.csv` file. 
+ The first column must be a unique integer for the class label. For example: `1`, `2`, `3`, `4`, `n`. 
+ The second column must be a string containing the corresponding text, matching the type and language for the model. 

 For vision models: 
+  The bucket must have as many subdirectories as the number of classes. 
+  Each subdirectory should contain images that belong to that class in .jpg format. 

**Note**  
 The Amazon S3 bucket must be in the same AWS Region where you're running SageMaker Studio Classic because SageMaker AI doesn't allow cross-Region requests. 
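As a sketch of the text-model format described above, the following builds a `data.csv` in memory with an integer class label in the first column and the corresponding text in the second. The sample rows are hypothetical; upload the resulting file to your Amazon S3 bucket in the same Region as Studio Classic.

```python
import csv
import io

# Hypothetical sample rows: column 1 is an integer class label,
# column 2 is the corresponding text for the model.
rows = [
    (1, "The flight was delayed by two hours."),
    (2, "Great service and a smooth landing."),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
for label, text in rows:
    writer.writerow([label, text])

# Contents to upload as data.csv in your S3 bucket.
data_csv = buffer.getvalue()
```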

## Fine-Tuning deployment configuration
<a name="jumpstart-fine-tune-deploy"></a>

The p3 instance family is recommended as the fastest for deep learning training, and is therefore recommended for fine-tuning a model. The following chart shows the number of GPUs in each instance type. You can also choose from other available options, including p2 and g4 instance types. 


|  Instance type  |  GPUs  | 
| --- | --- | 
|  p3.2xlarge  |  1  | 
|  p3.8xlarge  |  4  | 
|  p3.16xlarge  |  8  | 
|  p3dn.24xlarge  |  8  | 

## Hyperparameters
<a name="jumpstart-hyperparameters"></a>

You can customize the hyperparameters of the training job that are used to fine-tune the model. The hyperparameters available for each fine-tunable model differ depending on the model. For information on each available hyperparameter, reference the hyperparameters documentation for the model of your choosing in [Built-in algorithms and pretrained models in Amazon SageMaker](algos.md). For example, see [Image Classification - TensorFlow Hyperparameters](IC-TF-Hyperparameter.md) for details on the fine-tunable Image Classification - TensorFlow hyperparameters.

If you use the default dataset for text models without changing the hyperparameters, you get a nearly identical model as a result. For vision models, the default dataset is different from the dataset used to train the pretrained models, so your model is different as a result. 

The following hyperparameters are common among models: 
+ **Epochs** – One epoch is one cycle through the entire dataset. Multiple intervals complete a batch, and multiple batches eventually complete an epoch. Multiple epochs are run until the accuracy of the model reaches an acceptable level, or until the error rate drops below an acceptable level. 
+ **Learning rate** – The amount that values should be changed between epochs. As the model is refined, its internal weights are nudged and error rates are checked to see if the model improves. A typical learning rate is 0.1 or 0.01, where 0.01 is a much smaller adjustment and could cause the training to take a long time to converge, whereas 0.1 is much larger and can cause the training to overshoot. It is one of the primary hyperparameters that you might adjust for training your model. Note that for text models, a much smaller learning rate (5e-5 for BERT) can result in a more accurate model. 
+ **Batch size** – The number of records from the dataset that is to be selected for each interval to send to the GPUs for training. 

  In an image example, you might send out 32 images per GPU, so 32 would be your batch size. If you choose an instance type with more than one GPU, the batch is divided by the number of GPUs. Suggested batch size varies depending on the data and the model that you are using. For example, how you optimize for image data differs from how you handle language data. 

  In the instance type chart in the deployment configuration section, you can see the number of GPUs per instance type. Start with a standard recommended batch size (for example, 32 for a vision model). Then, multiply this by the number of GPUs in the instance type that you selected. For example, if you're using a `p3.8xlarge`, this would be 32 (batch size) multiplied by 4 (GPUs), for a total of 128, as your batch size adjusts for the number of GPUs. For a text model like BERT, try starting with a batch size of 64, and then reduce as needed. 
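The scaling rule above can be sketched as a small helper that multiplies a per-GPU batch size by the instance's GPU count, using the values from the table in the deployment configuration section:

```python
# GPUs per instance type, from the table in the deployment
# configuration section.
GPUS_PER_INSTANCE = {
    "p3.2xlarge": 1,
    "p3.8xlarge": 4,
    "p3.16xlarge": 8,
    "p3dn.24xlarge": 8,
}

def effective_batch_size(instance_type: str, per_gpu_batch: int) -> int:
    """Scale a per-GPU batch size by the instance's GPU count."""
    return per_gpu_batch * GPUS_PER_INSTANCE[instance_type]

effective_batch_size("p3.8xlarge", 32)  # 32 per GPU x 4 GPUs = 128
```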

 

## Training output
<a name="jumpstart-training"></a>

When the fine-tuning process is complete, JumpStart provides information about the model: parent model, training job name, training job ARN, training time, and output path. The output path is where you can find your new model in an Amazon S3 bucket. The folder structure uses the model name that you provided, and the model file is in an `/output` subfolder, always named `model.tar.gz`. 

 Example: `s3://bucket/model-name/output/model.tar.gz` 
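Given the fixed folder structure above, the artifact URI can be built from the bucket and the model name you provided; `bucket` and `model-name` below are placeholders, not real resources.

```python
def model_artifact_uri(bucket: str, model_name: str) -> str:
    """Build the S3 URI where the fine-tuned model artifact is written.

    The artifact is always named model.tar.gz inside an /output
    subfolder under the model name that you provided.
    """
    return f"s3://{bucket}/{model_name}/output/model.tar.gz"

model_artifact_uri("bucket", "model-name")
# -> "s3://bucket/model-name/output/model.tar.gz"
```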

## Configure default values for model training
<a name="jumpstart-config-defaults-training"></a>

You can configure default values for parameters such as IAM roles, VPCs, and KMS keys to pre-populate for JumpStart model deployment and training. For more information, see [Configure default values for JumpStart models](jumpstart-deploy.md#jumpstart-config-defaults).

# Share Models
<a name="jumpstart-share-models"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

You can share JumpStart models through the Studio Classic UI directly from the **Launched JumpStart assets** page using the following procedure:

1. Open Amazon SageMaker Studio Classic and choose **Launched JumpStart assets** in the **JumpStart** section of the left-hand navigation pane.

1. Select the **Training jobs** tab to view the list of your model training jobs.

1. Under the **Training jobs** list, select the training job that you want to share. This opens the training job details page. You cannot share more than one training job at a time.

1. In the header for the training job, choose **Share**, and select **Share with my organization**.

For more information about sharing models with your organization, see [Shared Models and Notebooks](jumpstart-content-sharing.md).

# Shared Models and Notebooks
<a name="jumpstart-content-sharing"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

Share your models and notebooks to centralize model artifacts, facilitate discoverability, and increase the reuse of models within your organization. When sharing your models, you can provide training and inference environment information and allow collaborators to use these environments for their own training and inference jobs. 

All models that you share and models that are shared with you are searchable in a centralized location directly in Amazon SageMaker Studio Classic. For information on the onboarding steps to sign into Amazon SageMaker Studio Classic, see [Onboard to Amazon SageMaker AI Domain](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html).

**Topics**
+ [Model and notebook sharing](jumpstart-content-sharing-access.md)
+ [Access shared content](jumpstart-content-sharing-access-filter.md)
+ [Add a model](jumpstart-content-sharing-add-model.md)

# Model and notebook sharing
<a name="jumpstart-content-sharing-access"></a>

To share models and notebooks, navigate to the **Shared models** section in Amazon SageMaker Studio Classic, choose **Shared by my organization**, and then select the **Add** dropdown list. Choose to either add a model or add a notebook. 

![\[The menu to add shared models or notebooks to JumpStart.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-shared-models.png)


# Access shared content
<a name="jumpstart-content-sharing-access-filter"></a>

From the Amazon SageMaker Studio Classic UI, you can access shared content and filter what you see.

There are three main options for filtering shared models and notebooks:

1. **Shared by me** – Models and notebooks that you shared to JumpStart.

1. **Shared with me** – Models and notebooks shared with you.

1. **Shared by my organization** – All models and notebooks that are shared with anyone in your organization.

You can also sort your models and notebooks based on the time they were last updated or by ascending or descending alphabetical order. Choose the filter icon (![\[Funnel or filter icon representing data filtering or narrowing down options.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-filter-icon.png)) to further sort your selections.

# Add a model
<a name="jumpstart-content-sharing-add-model"></a>

To add a model, choose **Shared by my organization**, and then select **Add model** from the **Add** dropdown list. Enter the basic information for your model, and add any training or inference information you want to share with collaborators to train or deploy your model. After you enter all the necessary information, choose **Add model** in the lower right corner of the screen.

**Topics**
+ [Add basic information](jumpstart-content-sharing-info.md)
+ [Enable training](jumpstart-content-sharing-training.md)
+ [Enable deployment](jumpstart-content-sharing-deployment.md)
+ [Add a notebook](jumpstart-content-sharing-notebooks.md)

# Add basic information
<a name="jumpstart-content-sharing-info"></a>

Adding a model in JumpStart involves providing some basic information about the model you want to train. This information helps define the characteristics and capabilities of your model, as well as improving its discoverability and searchability. To create a new model, follow these steps:

1. Add a title for this model. Adding a title automatically populates a unique identifier in the ID field based on the model title.

1. Add a description of the model.

1. Select a data type from the options: *text*, *vision*, *tabular*, or *audio*.

1. Select a machine learning task from the list of available tasks, such as *image classification* or *text generation*.

1. Select a machine learning framework.

1. Add metadata information with keywords or phrases to use when searching for a model. Use commas to separate keywords. Any spaces are automatically replaced with commas.

# Enable training
<a name="jumpstart-content-sharing-training"></a>

When adding a model to share, you can optionally provide a training environment and allow collaborators in your organization to train the shared model. 

**Note**  
If you are adding a tabular model, you also need to specify a column format and target column to enable training.

After providing the basic details about your model, you'll need to configure the settings for the training job that will be used to train your model. This involves specifying the container environment, code scripts, datasets, output locations, and various other parameters to control how the training job is executed. To configure the training job settings, follow these steps:

1. Add a container to use for model training. You can select a container used for an existing training job, bring your own container in Amazon ECR, or use an Amazon SageMaker Deep Learning Container.

1. Add environment variables.

1. Provide a training script location.

1. Provide a script mode entry point.

1. Provide an Amazon S3 URI for model artifacts generated during training.

1. Provide the Amazon S3 URI to the default training dataset.

1. Provide a model output path. The model output path should be the Amazon S3 URI path for any model artifacts generated from training. SageMaker AI saves the model artifacts as a single compressed TAR file in Amazon S3.

1. Provide a validation dataset to use for evaluating your model during training. Validation datasets must contain the same number of columns and the same feature headers as the training dataset.

1. Turn on network isolation. Network isolation isolates the model container so that no inbound or outbound network calls can be made to or from the model container.

1. Provide training channels through which SageMaker AI can access your data. For example, you might specify input channels named `train` or `test`. For each channel, specify a channel name and a URI to the location of your data. Choose **Browse** to search for Amazon S3 locations.

1. Provide hyperparameters. Add any hyperparameters with which collaborators should experiment during training. Provide a range of valid values for these hyperparameters. This range is used for training job hyperparameter validation. You can define ranges based on the datatype of the hyperparameter.

1. Select an instance type. We recommend a GPU instance with more memory for training with large batch sizes. For a comprehensive list of SageMaker training instances across AWS Regions, see the **On-Demand Pricing** table in [Amazon SageMaker Pricing.](https://aws.amazon.com/sagemaker/pricing/)

1. Provide metrics. Define metrics for a training job by specifying a name and a regular expression for each metric that your training monitors. Design the regular expressions to capture the values of metrics that your algorithm emits. For example, the metric `loss` might have the regular expression `"Loss =(.*?);"`.
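The metric step above can be sketched with Python's `re` module. The regular expression is the one from the example; the log line and `metric_definitions` name are illustrative.

```python
import re

# The example regular expression captures the value between
# "Loss =" and the next ";" in the algorithm's log output.
metric_definitions = [{"Name": "loss", "Regex": "Loss =(.*?);"}]

log_line = "Epoch 3: Loss =0.4187; elapsed=12s"  # hypothetical log line
match = re.search(metric_definitions[0]["Regex"], log_line)
loss_value = float(match.group(1))  # 0.4187
```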

# Enable deployment
<a name="jumpstart-content-sharing-deployment"></a>

When adding a model to share, you can optionally provide an inference environment in which collaborators in your organization can deploy the shared model for inference.

After training your machine learning model, you'll need to deploy it to an Amazon SageMaker AI endpoint for inference. This involves providing a container environment, an inference script, the model artifacts generated during training, and selecting an appropriate compute instance type. Configuring these settings properly is crucial for ensuring your deployed model can make accurate predictions and handle inference requests efficiently. To set up your model for inference, follow these steps:

1. Add a container to use for inference. You can bring your own container in Amazon ECR or use an Amazon SageMaker Deep Learning Container.

1. Provide the Amazon S3 URI to an inference script. Custom inference scripts run inside your chosen container. Your inference script should include a function for model loading, and optionally functions for generating predictions and for input and output processing. For more information on creating inference scripts for the framework of your choice, see [Frameworks](https://sagemaker.readthedocs.io/en/stable/frameworks/index.html) in the SageMaker Python SDK documentation. For example, for TensorFlow, see [How to implement the pre- and/or post-processing handler(s)](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/deploying_tensorflow_serving.html#how-to-implement-the-pre-and-or-post-processing-handler-s).

1. Provide an Amazon S3 URI for model artifacts. Model artifacts are the output that results from training a model, and typically consist of trained parameters, a model definition that describes how to compute inferences, and other metadata. If you trained your model in SageMaker AI, the model artifacts are saved as a single compressed TAR file in Amazon S3. If you trained your model outside SageMaker AI, you need to create this single compressed TAR file and save it in an Amazon S3 location.

1. Select an instance type. We recommend a GPU instance with more memory for hosting models with large inference payloads. For a comprehensive list of SageMaker AI hosting instances across AWS Regions, see the **On-Demand Pricing** table in [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/).
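If you trained your model outside SageMaker AI, the single compressed TAR file mentioned above can be created with Python's standard `tarfile` module. The artifact file names below are hypothetical placeholders; substitute your framework's actual outputs before uploading the archive to Amazon S3.

```python
import os
import tarfile
import tempfile

# Hypothetical artifact file names; substitute your framework's outputs.
workdir = tempfile.mkdtemp()
for name in ("model.pth", "inference.py"):
    with open(os.path.join(workdir, name), "w") as f:
        f.write("placeholder")

# Package the artifacts into a single compressed TAR file, with each
# file at the archive root.
archive = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    for name in ("model.pth", "inference.py"):
        tar.add(os.path.join(workdir, name), arcname=name)

# Verify the archive contents.
with tarfile.open(archive, "r:gz") as tar:
    members = sorted(member.name for member in tar.getmembers())
```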

# Add a notebook
<a name="jumpstart-content-sharing-notebooks"></a>

To add a notebook, choose **Shared by my organization**, and then select **Add notebook** from the **Add** dropdown list. Enter the basic information for your notebook and provide an Amazon S3 URI for the location of that notebook. 

First, add the basic descriptive information about your notebook. This information is used to improve the searchability of your notebook.

1. Add a title for this notebook. Adding a title automatically populates a unique identifier in the ID field based on the notebook title.

1. Add a description of the notebook.

1. Select a data type from the options: *text*, *vision*, *tabular*, or *audio*.

1. Select an ML task from the list of available tasks, such as *image classification* or *text generation*.

1. Select an ML framework.

1. Add metadata information with keywords or phrases to use when searching for a notebook. Use commas to separate keywords. Any spaces are automatically replaced with commas.

After you've specified the basic information, you can provide an Amazon S3 URI for the location of that notebook. You can choose **Browse** to search through your Amazon S3 buckets for your notebook file location. After you find your notebook, copy the Amazon S3 URI, choose **Cancel**, and then add the Amazon S3 URI to the **Notebook Location** field. 

After you enter all the necessary information, choose **Add notebook** in the lower right corner. 

# End-to-end JumpStart solution templates
<a name="jumpstart-solutions"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
JumpStart Solutions are only available in Studio Classic.

SageMaker JumpStart provides one-click, end-to-end solutions that are designed to address common machine learning use cases. They use proven algorithms for their domains and provide a complete workflow which typically includes data processing, model training, deployment, inference, and monitoring. Explore the following use cases for more information on available solution templates.
+ [Demand forecasting](#jumpstart-solutions-demand-forecasting)
+ [Credit rating prediction](#jumpstart-solutions-credit-prediction)
+ [Fraud detection](#jumpstart-solutions-fraud-detection)
+ [Computer vision](#jumpstart-solutions-computer-vision)
+ [Extract and analyze data from documents](#jumpstart-solutions-documents)
+ [Predictive maintenance](#jumpstart-solutions-predictive-maintenance)
+ [Churn prediction](#jumpstart-solutions-churn-prediction)
+ [Personalized recommendations](#jumpstart-solutions-recommendations)
+ [Reinforcement learning](#jumpstart-solutions-reinforcement-learning)
+ [Healthcare and life sciences](#jumpstart-solutions-healthcare-life-sciences)
+ [Financial pricing](#jumpstart-solutions-financial-pricing)
+ [Causal inference](#jumpstart-solutions-causal-inference)

Choose the solution template that best fits your use case from the JumpStart landing page. When you choose a solution template, JumpStart opens a new tab showing a description of the solution and a **Launch** button. When you select **Launch**, JumpStart creates all of the resources that you need to run the solution, including training and model hosting instances. For more information on launching a JumpStart solution, see [Launch a Solution](jumpstart-solutions-launch.md).

After launching the solution, you can explore solution features and any generated artifacts in JumpStart. Use the **Launched JumpStart assets** menu to find your solution. In your solution's tab, select **Open Notebook** to use provided notebooks and explore the solution’s features. When artifacts are generated during launch or after running the provided notebooks, they're listed in the **Generated Artifacts** table. You can delete individual artifacts with the trash icon (![\[The trash icon for JumpStart.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-trash.png)). You can delete all of the solution’s resources by choosing **Delete solution resources**.

## Demand forecasting
<a name="jumpstart-solutions-demand-forecasting"></a>

Demand forecasting uses historical time series data to estimate future customer demand over a specific period, streamlining the supply-demand decision-making process across businesses. 

Demand forecasting use cases include predicting ticket sales in the transportation industry, stock prices, the number of hospital visits, the number of customer representatives to hire for multiple locations in the next month, product sales across multiple regions in the next quarter, cloud server usage for the next day for a video streaming service, electricity consumption for multiple regions over the next week, readings from IoT devices and sensors such as energy consumption, and more.

Time series data is categorized as *univariate* and *multivariate*. For example, the total electricity consumption for a single household is a univariate time series over a period of time. When multiple univariate time series are stacked on each other, it’s called a multivariate time series. For example, the total electricity consumption of 10 different (but correlated) households in a single neighborhood makes up a multivariate time series dataset.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Demand forecasting  | Demand forecasting for multivariate time series data using three state-of-the-art time series forecasting algorithms: [LSTNet](https://ts.gluon.ai/stable/api/gluonts/gluonts.mx.model.lstnet.html), [Prophet](https://facebook.github.io/prophet/), and [SageMaker AI DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html). |  [GitHub »](https://github.com/awslabs/sagemaker-deep-demand-forecast)  | 

## Credit rating prediction
<a name="jumpstart-solutions-credit-prediction"></a>

Use JumpStart's credit rating prediction solutions to predict corporate credit ratings or to explain credit prediction decisions made by machine learning models. Compared to traditional credit rating modeling methods, machine learning models can automate and improve the accuracy of credit prediction. 


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Corporate credit rating prediction  | Multimodal (long text and tabular) machine learning for quality credit predictions using AWS [AutoGluon Tabular](https://auto.gluon.ai/scoredebugweight/tutorials/tabular_prediction/index.html). | [GitHub »](https://github.com/awslabs/sagemaker-corporate-credit-rating) | 
| Graph-based credit scoring  | Predict corporate credit ratings using tabular data and a corporate network by training a [Graph Neural Network GraphSAGE](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf) and AWS [AutoGluon Tabular](https://auto.gluon.ai/scoredebugweight/tutorials/tabular_prediction/index.html) model. | Find in Amazon SageMaker Studio Classic.  | 
| Explain credit decisions  | Predict credit default in credit applications and provide explanations using [LightGBM](https://lightgbm.readthedocs.io/en/latest/) and [SHAP (SHapley Additive exPlanations)](https://shap.readthedocs.io/en/latest/index.html). |  [GitHub »](https://github.com/awslabs/sagemaker-explaining-credit-decisions)  | 

## Fraud detection
<a name="jumpstart-solutions-fraud-detection"></a>

Many businesses lose billions annually to fraud. Machine learning-based fraud detection models can help systematically identify likely fraudulent activities from tremendous amounts of data. The following solutions use transaction and user identity datasets to identify fraudulent transactions.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Detect malicious users and transactions | Automatically detect potentially fraudulent activity in transactions using [SageMaker AI XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) with the over-sampling technique [Synthetic Minority Over-sampling](https://arxiv.org/abs/1106.1813) (SMOTE). |  [GitHub »](https://github.com/awslabs/fraud-detection-using-machine-learning)  | 
| Fraud detection in financial transactions using deep graph library | Detect fraud in financial transactions by training a [graph convolutional network](https://arxiv.org/pdf/1703.06103.pdf) with the [deep graph library](https://www.dgl.ai/) and a [SageMaker AI XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) model. |  [GitHub »](https://github.com/awslabs/sagemaker-graph-fraud-detection)  | 
| Financial payment classification | Classify financial payments based on transaction information using [SageMaker AI XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html). Use this solution template as an intermediate step in fraud detection, personalization, or anomaly detection. |  Find in Amazon SageMaker Studio Classic.  | 

## Computer vision
<a name="jumpstart-solutions-computer-vision"></a>

With the rise of business use cases such as autonomous vehicles, smart video surveillance, healthcare monitoring, and various object counting tasks, fast and accurate object detection systems are in rising demand. These systems involve not only recognizing and classifying every object in an image, but also localizing each one by drawing the appropriate bounding box around it. In the last decade, the rapid advances of deep learning techniques have greatly accelerated the momentum of object detection.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Visual product defect detection | Identify defective regions in product images either by training an [object detection model from scratch](https://ieeexplore.ieee.org/document/8709818) or fine-tuning pretrained SageMaker AI models. |  [GitHub »](https://github.com/awslabs/sagemaker-defect-detection)  | 
| Handwriting recognition  | Recognize handwritten text in images by training an [object detection model](https://mxnet.apache.org/versions/1.0.0/api/python/gluon/model_zoo.html#mxnet.gluon.model_zoo.vision.resnet34_v1) and [handwriting recognition model](https://arxiv.org/abs/1910.00663). Label your own data using [SageMaker Ground Truth](https://aws.amazon.com/sagemaker/data-labeling/). | [GitHub »](https://github.com/awslabs/sagemaker-handwritten-text-recognition) | 
| Object detection for bird species | Identify bird species in a scene using a [SageMaker AI object detection model](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html). |  Find in Amazon SageMaker Studio Classic.  | 

## Extract and analyze data from documents
<a name="jumpstart-solutions-documents"></a>

JumpStart provides solutions for you to uncover valuable insights and connections in business-critical documents. Use cases include text classification, document summarization, handwriting recognition, relationship extraction, question and answering, and filling in missing values in tabular records.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Privacy for sentiment classification  | [Anonymize text ](https://www.amazon.science/blog/preserving-privacy-in-analyses-of-textual-data) to better preserve user privacy in sentiment classification. |  [GitHub »](https://github.com/awslabs/sagemaker-privacy-for-nlp)  | 
| Document understanding | Document summarization, entity extraction, and relationship extraction using the [transformers](https://huggingface.co/docs/transformers/index) library in PyTorch. |  [GitHub »](https://github.com/awslabs/sagemaker-document-understanding)  | 
| Handwriting recognition  | Recognize handwritten text in images by training an [object detection model](https://mxnet.apache.org/versions/1.0.0/api/python/gluon/model_zoo.html#mxnet.gluon.model_zoo.vision.resnet34_v1) and [handwriting recognition model](https://arxiv.org/abs/1910.00663). Label your own data using [SageMaker Ground Truth](https://aws.amazon.com/sagemaker/data-labeling/). | [GitHub »](https://github.com/awslabs/sagemaker-handwritten-text-recognition) | 
| Filling in missing values in tabular records  | Fill missing values in tabular records by training a [SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/) model. |  [GitHub »](https://github.com/awslabs/filling-in-missing-values-in-tabular-records)  | 

## Predictive maintenance
<a name="jumpstart-solutions-predictive-maintenance"></a>

Predictive maintenance aims to optimize the balance between corrective and preventative maintenance by facilitating the timely replacement of components. The following solutions use sensor data from industrial assets to predict machine failures, unplanned downtime, and repair costs.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Predictive maintenance for vehicle fleets  | Predict vehicle fleet failures using vehicle sensor and maintenance information with a convolutional neural network model. |  [GitHub »](https://github.com/awslabs/aws-fleet-predictive-maintenance/)  | 
| Predictive maintenance for manufacturing  | Predict the remaining useful life for each sensor by training a [stacked Bidirectional LSTM neural network](https://arxiv.org/pdf/1801.02143.pdf) model using historical sensor readings. |  [GitHub »](https://github.com/awslabs/predictive-maintenance-using-machine-learning)  | 

## Churn prediction
<a name="jumpstart-solutions-churn-prediction"></a>

Customer churn, or rate of attrition, is a costly problem faced by a wide range of companies. In an effort to reduce churn, companies can identify customers that are likely to leave their service in order to focus their efforts on customer retention. Use a JumpStart churn prediction solution to analyze data sources such as user behavior and customer support chat logs to identify customers that are at a high risk of canceling a subscription or service.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Churn prediction with text  | Predict churn using numerical, categorical, and textual features with [BERT encoder](https://huggingface.co/) and [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). |  [GitHub »](https://github.com/awslabs/sagemaker-churn-prediction-text)  | 
| Churn prediction for mobile phone customers | Identify unhappy mobile phone customers using [SageMaker AI XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html). |  Find in Amazon SageMaker Studio Classic.  | 

## Personalized recommendations
<a name="jumpstart-solutions-recommendations"></a>

You can use JumpStart solutions to analyze customer identity graphs or user sessions to better understand and predict customer behavior. Use the following solutions for personalized recommendations to model customer identity across multiple devices, to determine the likelihood of a customer making a purchase, or to create a custom movie recommender based on past customer behavior. 


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Entity resolution in identity graphs with deep graph library  | Perform cross-device entity linking for online advertising by training a [graph convolutional network](https://arxiv.org/pdf/1703.06103.pdf) with [deep graph library](https://www.dgl.ai/). |  [GitHub »](https://github.com/awslabs/sagemaker-graph-entity-resolution)  | 
| Purchase modeling | Predict whether a customer will make a purchase by training a [SageMaker AI XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) model. |  [GitHub »](https://github.com/awslabs/sagemaker-purchase-modelling)  | 
| Customized recommender system |  Train and deploy a custom recommender system that generates movie suggestions for a customer based on past behavior using Neural Collaborative Filtering in SageMaker AI.  |  Find in Amazon SageMaker Studio Classic.  | 

## Reinforcement learning
<a name="jumpstart-solutions-reinforcement-learning"></a>

Reinforcement learning (RL) is a type of learning based on interaction with the environment. An agent learns behavior through trial-and-error interactions with a dynamic environment, with the goal of maximizing the long-term rewards it receives from its actions. The agent maximizes rewards by trading off exploring actions with uncertain rewards against exploiting actions with known rewards.

RL is well-suited for solving large, complex problems, such as supply chain management, HVAC systems, industrial robotics, game artificial intelligence, dialog systems, and autonomous vehicles. 
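The explore/exploit trade-off described above is often illustrated with an epsilon-greedy bandit: with probability epsilon the agent tries a random action, and otherwise it takes the action with the best reward estimate so far. A minimal sketch with synthetic reward probabilities (not code from either starter kit):

```python
import random

def epsilon_greedy(true_probs, steps=5000, epsilon=0.1, seed=0):
    """Run an epsilon-greedy agent on a Bernoulli bandit; return reward estimates."""
    rng = random.Random(seed)
    counts = [0] * len(true_probs)       # pulls per arm
    estimates = [0.0] * len(true_probs)  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:       # explore: pick a random arm
            arm = rng.randrange(len(true_probs))
        else:                            # exploit: pick the best estimate so far
            arm = max(range(len(true_probs)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy([0.2, 0.5, 0.8])
print(estimates)  # well-sampled arms' estimates approach their true probabilities
```

Production RL solutions like the ones below use far richer state and policy representations, but the underlying tension between gathering information and acting on it is the same.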


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Reinforcement learning for Battlesnake AI competitions  | Provide a reinforcement learning workflow for training and inference with the [BattleSnake](https://play.battlesnake.com/) AI competitions. |  [GitHub »](https://github.com/awslabs/sagemaker-battlesnake-ai)  | 
| Distributed reinforcement learning for Procgen challenge  | Distributed reinforcement learning starter kit for the [NeurIPS 2020 Procgen](https://www.aicrowd.com/challenges/neurips-2020-procgen-competition) reinforcement learning challenge. | [GitHub »](https://github.com/aws-samples/sagemaker-rl-procgen-ray) | 

## Healthcare and life sciences
<a name="jumpstart-solutions-healthcare-life-sciences"></a>

Clinicians and researchers can use JumpStart solutions to analyze medical imagery, genomic information, and clinical health records. 


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Lung cancer survival prediction | Predict non-small cell lung cancer patient survival status with 3-dimensional lung computerized tomography (CT) scans, genomic data, and clinical health records using [SageMaker AI XGBoost](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html). |  [GitHub »](https://github.com/aws-samples/machine-learning-pipelines-for-multimodal-health-data/tree/sagemaker-soln-lcsp)  | 

## Financial pricing
<a name="jumpstart-solutions-financial-pricing"></a>

Many businesses dynamically adjust pricing on a regular basis in order to maximize their returns. Use the following JumpStart solutions for price optimization, dynamic pricing, option pricing, or portfolio optimization use cases. 


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Price optimization |  Estimate price elasticity using Double Machine Learning (ML) for causal inference and the [Prophet](https://facebook.github.io/prophet/) forecasting procedure. Use these estimates to optimize daily prices.  |  Find in Amazon SageMaker Studio Classic.  | 
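Price elasticity, which the solution above estimates with Double ML, measures the percentage change in quantity demanded per percentage change in price. A toy two-point computation with synthetic numbers (the actual solution estimates elasticity causally from historical data, not with this formula):

```python
def arc_elasticity(p0, q0, p1, q1):
    """Midpoint (arc) price elasticity of demand between two observations."""
    pct_dq = (q1 - q0) / ((q1 + q0) / 2)  # percent change in quantity (midpoint base)
    pct_dp = (p1 - p0) / ((p1 + p0) / 2)  # percent change in price (midpoint base)
    return pct_dq / pct_dp

# Raising the price from 10 to 12 drops daily sales from 100 to 80 units.
print(arc_elasticity(10, 100, 12, 80))
```

An elasticity below -1 means demand is elastic: the revenue lost from reduced volume outweighs the higher unit price, which is exactly the kind of trade-off a price optimization model searches over.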

## Causal inference
<a name="jumpstart-solutions-causal-inference"></a>

Researchers can use machine learning models such as Bayesian networks to represent causal dependencies and draw causal conclusions based on data. Use the following JumpStart solution to understand the causal relationship between nitrogen-based fertilizer application and corn crop yields.


| Solution name  | Description  | Get started  | 
| --- | --- | --- | 
| Crop yield counterfactuals |  Generate a counterfactual analysis of corn response to nitrogen. This solution learns the crop phenology cycle in its entirety using multi-spectral satellite imagery and [ground-level observations](https://www.sciencedirect.com/science/article/pii/S2352340921010283#tbl0001).  |  Find in Amazon SageMaker Studio Classic.  | 

# Launch a Solution
<a name="jumpstart-solutions-launch"></a>

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
JumpStart Solutions are only available in Studio Classic.

First, choose a solution through the SageMaker JumpStart landing page in the Amazon SageMaker Studio Classic UI. For information on the onboarding steps to sign in to Amazon SageMaker Studio Classic, see [Onboard to Amazon SageMaker AI domain](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html). For details on getting to the SageMaker JumpStart landing page, see [Open and use JumpStart in Studio Classic](studio-jumpstart.md#jumpstart-open-use).

After you choose a solution, a tab opens that shows a description of the solution and a **Launch** button. To launch the solution, choose **Launch** in the **Launch Solution** section. JumpStart then creates all of the resources needed to run the solution, including training and model hosting instances. 

## Advanced parameters
<a name="jumpstart-solutions-config"></a>

The solution that you choose may have advanced parameters that you can select. Choose **Advanced Parameters** to specify the AWS Identity and Access Management role for the solution. 

Solutions can launch resources across nine AWS services that interact with each other. For the solution to work as expected, newly created components from one service must be able to act on newly created components from another service. We recommend that you use the default IAM role to ensure that all needed permissions are added. For more information about IAM roles, see [AWS Identity and Access Management for Amazon SageMaker AI](security-iam.md).

**Default IAM role**

If you select this option, the default IAM roles that are required by this solution are used. Each solution requires different resources. The following list describes the default roles that are used for the solutions based on the service needed. For a description of the permissions required for each service, see [AWS Managed Policies for SageMaker Projects and JumpStart](security-iam-awsmanpol-sc.md).
+ **API Gateway** – AmazonSageMakerServiceCatalogProductsApiGatewayRole 
+ **CloudFormation** – AmazonSageMakerServiceCatalogProductsCloudformationRole
+ **CodeBuild** – AmazonSageMakerServiceCatalogProductsCodeBuildRole 
+ **CodePipeline** – AmazonSageMakerServiceCatalogProductsCodePipelineRole
+ **Events** – AmazonSageMakerServiceCatalogProductsEventsRole
+ **Firehose** – AmazonSageMakerServiceCatalogProductsFirehoseRole
+ **Glue** – AmazonSageMakerServiceCatalogProductsGlueRole
+ **Lambda** – AmazonSageMakerServiceCatalogProductsLambdaRole
+ **SageMaker AI** – AmazonSageMakerServiceCatalogProductsExecutionRole 

If you are using a new SageMaker AI domain with JumpStart project templates enabled, these roles are automatically created in your account.

If you are using an existing SageMaker AI domain, these roles may not exist in your account. If this is the case, you will receive the following error when launching the solution. 

```
Unable to locate the updated roles required to launch this solution, a general role '/service-role/AmazonSageMakerServiceCatalogProductsUseRole' will be used. Please update your studio domain to generate these roles.
```

You can still launch a solution without the needed role, but the legacy default role `AmazonSageMakerServiceCatalogProductsUseRole` is used in place of the needed role. The legacy default role has trust relationships with all of the services that JumpStart solutions need to interact with. For the best security, we recommend that you update your domain to have the newly created default roles for each AWS service.
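A role's trust policy determines which services can assume it, which is why the legacy role must trust every service the solutions use. The following is a hedged sketch of what such a trust relationship looks like, showing only a few of the service principals (illustrative only; check the actual policy attached to the role in your account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "sagemaker.amazonaws.com",
          "cloudformation.amazonaws.com",
          "codebuild.amazonaws.com",
          "lambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The per-service default roles narrow this: each role trusts only its own service, which is why they provide better security than the shared legacy role.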

If you have already onboarded to a SageMaker AI domain, you can update your domain to generate the default roles using the following procedure.

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Choose **Control Panel** at the top left of the page.

1. From the **domain** page, choose the **Settings** icon (![\[Black square icon representing a placeholder or empty image.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/icons/Settings_squid.png)) to edit the domain settings.

1. On **General Settings**, choose **Next**.

1. Under **SageMaker Projects and JumpStart**, select **Enable Amazon SageMaker project templates and Amazon SageMaker JumpStart for this account** and **Enable Amazon SageMaker project templates and Amazon SageMaker JumpStart for Studio Classic users**, and then choose **Next**.

1. Select **Submit**.

You can then see the default roles listed under **Projects - Amazon SageMaker project templates enabled for this account** on the **Apps - Studio** tab.

**Find IAM role**

If you select this option, you must select an existing IAM role from the dropdown list for each of the required services. The selected role must have at least the minimum permissions required for the corresponding service. For a description of the permissions required for each service, see [AWS Managed Policies for SageMaker Projects and JumpStart](security-iam-awsmanpol-sc.md).

**Input IAM role**

If you select this option, you must manually enter the ARN for an existing IAM role. The selected role must have at least the minimum permissions required for the corresponding service. For a description of the permissions required for each service, see [AWS Managed Policies for SageMaker Projects and JumpStart](security-iam-awsmanpol-sc.md).

# Amazon SageMaker JumpStart Industry: Financial
<a name="studio-jumpstart-industry"></a>

Use SageMaker JumpStart Industry: Financial solutions, models, and example notebooks to learn about SageMaker AI features and capabilities through curated one-step solutions and example notebooks of industry-focused machine learning (ML) problems. The notebooks also walk through how to use the SageMaker JumpStart Industry Python SDK to enhance industry text data and fine-tune pretrained models.

**Topics**
+ [Amazon SageMaker JumpStart Industry Python SDK](#studio-jumpstart-industry-pysdk)
+ [Amazon SageMaker JumpStart Industry: Financial Solution](#studio-jumpstart-industry-solutions)
+ [Amazon SageMaker JumpStart Industry: Financial Models](#studio-jumpstart-industry-models)
+ [Amazon SageMaker JumpStart Industry: Financial Example Notebooks](#studio-jumpstart-industry-examples)
+ [Amazon SageMaker JumpStart Industry: Financial Blog Posts](#studio-jumpstart-industry-blogs)
+ [Amazon SageMaker JumpStart Industry: Financial Related Research](#studio-jumpstart-industry-research)
+ [Amazon SageMaker JumpStart Industry: Financial Additional Resources](#studio-jumpstart-industry-resources)

## Amazon SageMaker JumpStart Industry Python SDK
<a name="studio-jumpstart-industry-pysdk"></a>

SageMaker JumpStart provides processing tools for curating industry datasets and fine-tuning pretrained models through its client library, the SageMaker JumpStart Industry Python SDK. For detailed API documentation of the SDK, and to learn more about processing and enhancing industry text datasets to improve the performance of state-of-the-art models on SageMaker JumpStart, see the [SageMaker JumpStart Industry Python SDK open source documentation](https://sagemaker-jumpstart-industry-pack.readthedocs.io).

## Amazon SageMaker JumpStart Industry: Financial Solution
<a name="studio-jumpstart-industry-solutions"></a>

SageMaker JumpStart Industry: Financial provides the following solution notebooks:
+ **Corporate Credit Rating Prediction**

  This SageMaker JumpStart Industry: Financial solution provides a template for a text-enhanced corporate credit rating model. It shows how to combine a model based on numeric features (in this case, Altman's famous five financial ratios) with texts from SEC filings to improve the prediction of credit ratings. In addition to the five Altman ratios, you can add more variables as needed or set custom variables. This solution notebook shows how the SageMaker JumpStart Industry Python SDK helps compute Natural Language Processing (NLP) scores for texts from SEC filings. The solution then demonstrates how to train a model on the enhanced dataset, deploy the model to a SageMaker AI endpoint for production, and receive improved predictions in real time.
+ **Graph-Based Credit Scoring**

  Credit ratings are traditionally generated using models based on financial statement data and market data, which are tabular only (numeric and categorical). This solution constructs a network of firms using [SEC filings](https://www.sec.gov/edgar/searchedgar/companysearch.html) and shows how to combine the network of firm relationships with tabular data to generate accurate rating predictions. The solution demonstrates a methodology for using data on firm linkages to extend traditional tabular-based credit scoring models, which the ratings industry has used for decades, to the class of machine learning models on networks.
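The five Altman ratios used in the credit rating solution combine into the classic Altman Z-score, a linear discriminant over working capital, retained earnings, EBIT, market equity, and sales, each scaled by total assets or liabilities. A quick illustration of the classic 1968 formula (the solution augments numeric features like these with NLP scores rather than using the raw Z-score):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic Altman Z-score for public manufacturing firms (1968 weights)."""
    return (1.2 * wc_ta       # working capital / total assets
            + 1.4 * re_ta     # retained earnings / total assets
            + 3.3 * ebit_ta   # EBIT / total assets
            + 0.6 * mve_tl    # market value of equity / total liabilities
            + 1.0 * sales_ta) # sales / total assets

# A hypothetical firm; Z above ~2.99 is the traditional "safe" zone,
# below ~1.81 the "distress" zone.
print(altman_z(0.25, 0.35, 0.15, 1.8, 1.1))
```

The text-enhanced model in the solution keeps ratio-style features like these as the tabular inputs and adds NLP scores computed from SEC filings alongside them.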

**Note**  
The solution notebooks are for demonstration purposes only. They should not be relied on as financial or investment advice.

You can find these financial services solutions through the SageMaker JumpStart page in Studio Classic.

**Important**  
As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the Studio Classic application. For information about using the updated Studio experience, see [Amazon SageMaker Studio](studio-updated.md).  
Studio Classic is still maintained for existing workloads but is no longer available for onboarding. You can only stop or delete existing Studio Classic applications and cannot create new ones. We recommend that you [migrate your workload to the new Studio experience](studio-updated-migrate.md).

**Note**  
The SageMaker JumpStart Industry: Financial solutions, model cards, and example notebooks are hosted and runnable only through SageMaker Studio Classic. Log in to the [SageMaker AI console](https://console.aws.amazon.com/sagemaker), and launch SageMaker Studio Classic. For more information about how to find the solution card, see the previous topic at [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html).

## Amazon SageMaker JumpStart Industry: Financial Models
<a name="studio-jumpstart-industry-models"></a>

SageMaker JumpStart Industry: Financial provides the following pretrained [Robustly Optimized BERT approach (RoBERTa)](https://arxiv.org/pdf/1907.11692.pdf) models:
+ **Financial Text Embedding (RoBERTa-SEC-Base)**
+ **RoBERTa-SEC-WIKI-Base**
+ **RoBERTa-SEC-Large**
+ **RoBERTa-SEC-WIKI-Large**

The RoBERTa-SEC-Base and RoBERTa-SEC-Large models are text embedding models based on [GluonNLP's RoBERTa model](https://nlp.gluon.ai/api/model.html#gluonnlp.model.RoBERTaModel) and pretrained on S&P 500 SEC 10-K/10-Q reports from 2010 to 2019. In addition, SageMaker JumpStart Industry: Financial provides two more RoBERTa variations, RoBERTa-SEC-WIKI-Base and RoBERTa-SEC-WIKI-Large, which are pretrained on the SEC filings and common texts from Wikipedia. 

You can find these models in SageMaker JumpStart by navigating to the **Text Models** node, choosing **Explore All Text Models**, and then filtering for the ML task **Text Embedding**. After you select a model, you can access its paired notebook, which walks you through fine-tuning the pretrained model for specific classification tasks on multimodal datasets enhanced by the SageMaker JumpStart Industry Python SDK.
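Downstream, the embedding vectors these models produce are typically compared with cosine similarity, for example to find filings with similar language. A minimal sketch using hand-made vectors (real RoBERTa-SEC embeddings have several hundred dimensions; these three-dimensional ones are purely illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

filing_a = [0.8, 0.1, 0.3]    # toy "embedding" of one filing
filing_b = [0.7, 0.2, 0.4]    # similar language -> similar direction
filing_c = [-0.5, 0.9, -0.2]  # dissimilar language

print(cosine_similarity(filing_a, filing_b) > cosine_similarity(filing_a, filing_c))  # True
```

Fine-tuning, as the paired notebooks show, instead attaches a classification head to the embedding model, but the intuition is the same: filings that read alike land near each other in the embedding space.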

**Note**  
The model notebooks are for demonstration purposes only. They should not be relied on as financial or investment advice.

The following screenshot shows the pretrained model cards provided through the SageMaker JumpStart page in Studio Classic.

![\[The pretrained model cards provided through the SageMaker AI JumpStart page on Studio Classic.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/jumpstart/jumpstart-finance-models.png)


**Note**  
The SageMaker JumpStart Industry: Financial solutions, model cards, and example notebooks are hosted and runnable only through SageMaker Studio Classic. Log in to the [SageMaker AI console](https://console.aws.amazon.com/sagemaker), and launch SageMaker Studio Classic. For more information about how to find the model cards, see the previous topic at [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html).

## Amazon SageMaker JumpStart Industry: Financial Example Notebooks
<a name="studio-jumpstart-industry-examples"></a>

SageMaker JumpStart Industry: Financial provides the following example notebooks to demonstrate solutions to industry-focused ML problems:
+ **Financial TabText Data Construction** – This example introduces how to use the SageMaker JumpStart Industry Python SDK for processing the SEC filings, such as text summarization and scoring texts based on NLP score types and their corresponding word lists. To preview the content of this notebook, see [Simple Construction of a Multimodal Dataset from SEC Filings and NLP Scores](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/notebooks/finance/notebook1/SEC_Retrieval_Summarizer_Scoring.html).
+ **Multimodal ML on TabText Data** – This example shows how to merge different types of datasets into a single dataframe called TabText and perform multimodal ML. To preview the content of this notebook, see [Machine Learning on a TabText Dataframe – An Example Based on the Paycheck Protection Program](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/notebooks/finance/notebook2/PPP_TabText_ML.html).
+ **Multi-category ML on SEC filings data** – This example shows how to train an AutoGluon NLP model on the multimodal (TabText) datasets curated from SEC filings for a multiclass classification task. To preview the content of this notebook, see [Classify SEC 10K/Q Filings to Industry Codes Based on the MDNA Text Column](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/notebooks/finance/notebook3/SEC_MNIST_ML.html).
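The idea behind a TabText dataframe is simply to place tabular features and free-text columns side by side, keyed on a shared record ID, so one multimodal model can consume both. A minimal sketch with plain dictionaries and hypothetical field names (the notebooks build pandas dataframes with the SDK instead):

```python
# Tabular records and text records keyed by a shared filing ID.
tabular = {
    "F001": {"revenue": 120.5, "debt_ratio": 0.42},
    "F002": {"revenue": 87.3, "debt_ratio": 0.65},
}
texts = {
    "F001": "Management expects stable demand across segments.",
    "F002": "The company faces significant liquidity risk.",
}

def build_tabtext(tabular, texts):
    """Join each record's tabular features with its text column into one row."""
    return {
        key: {**features, "mdna_text": texts.get(key, "")}
        for key, features in tabular.items()
    }

rows = build_tabtext(tabular, texts)
print(rows["F002"]["debt_ratio"], rows["F002"]["mdna_text"])
```

In the actual notebooks, the text column holds SEC filing sections (such as MD&A) retrieved and summarized by the SDK, and the joined rows feed a single training job.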

**Note**  
The example notebooks are for demonstration purposes only. They should not be relied on as financial or investment advice.

**Note**  
The SageMaker JumpStart Industry: Financial solutions, model cards, and example notebooks are hosted and runnable only through SageMaker Studio Classic. Log in to the [SageMaker AI console](https://console.aws.amazon.com/sagemaker), and launch SageMaker Studio Classic. For more information about how to find the example notebooks, see the previous topic at [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html).

To preview the content of the example notebooks, see [Tutorials – Finance](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/notebooks/index.html) in the *SageMaker JumpStart Industry Python SDK documentation*.

## Amazon SageMaker JumpStart Industry: Financial Blog Posts
<a name="studio-jumpstart-industry-blogs"></a>

For in-depth applications of SageMaker JumpStart Industry: Financial solutions, models, examples, and the SDK, see the following blog posts:
+ [Use pre-trained financial language models for transfer learning in Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/use-pre-trained-financial-language-models-for-transfer-learning-in-amazon-sagemaker-jumpstart/)
+ [Use SEC text for ratings classification using multimodal ML in Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/use-sec-text-for-ratings-classification-using-multimodal-ml-in-amazon-sagemaker-jumpstart/)
+ [Create a dashboard with SEC text for financial NLP in Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/create-a-dashboard-with-sec-text-for-financial-nlp-in-amazon-sagemaker-jumpstart/)
+ [Build a corporate credit ratings classifier using graph machine learning in Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/build-a-corporate-credit-ratings-classifier-using-graph-machine-learning-in-amazon-sagemaker-jumpstart/)
+ [Domain-adaptation Fine-tuning of Foundation Models in Amazon SageMaker JumpStart on Financial data](https://aws.amazon.com/blogs/machine-learning/domain-adaptation-fine-tuning-of-foundation-models-in-amazon-sagemaker-jumpstart-on-financial-data/)

## Amazon SageMaker JumpStart Industry: Financial Related Research
<a name="studio-jumpstart-industry-research"></a>

For research related to SageMaker JumpStart Industry: Financial solutions, see the following papers:
+ [Context, Language Modeling, and Multimodal Data in Finance](https://www.pm-research.com/content/iijjfds/3/3/52)
+ [Multimodal Machine Learning for Credit Modeling](https://www.amazon.science/publications/multimodal-machine-learning-for-credit-modeling)
+ [On the Lack of Robust Interpretability of Neural Text Classifiers](https://www.amazon.science/publications/on-the-lack-of-robust-interpretability-of-neural-text-classifiers)
+ [FinLex: An Effective Use of Word Embeddings for Financial Lexicon Generation](https://www.sciencedirect.com/science/article/pii/S2405918821000131)

## Amazon SageMaker JumpStart Industry: Financial Additional Resources
<a name="studio-jumpstart-industry-resources"></a>

For additional documentation and tutorials, see the following resources:
+ [The SageMaker JumpStart Industry: Financial Python SDK](https://pypi.org/project/smjsindustry/)
+ [SageMaker JumpStart Industry: Financial Python SDK Tutorials](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/notebooks/index.html#)
+ [The SageMaker JumpStart Industry: Financial GitHub repository](https://github.com/aws/sagemaker-jumpstart-industry-pack/)
+ [Getting started with Amazon SageMaker AI - Machine Learning Tutorials](https://aws.amazon.com/sagemaker/getting-started/)