

# Algorithms and packages in the AWS Marketplace
<a name="sagemaker-marketplace"></a>

Amazon SageMaker AI integrates with AWS Marketplace, enabling developers to charge other SageMaker AI users for the use of their algorithms and model packages. AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, deploy, and manage third-party software and services that customers need to build solutions and run their businesses. AWS Marketplace includes thousands of software listings in popular categories, such as security, networking, storage, machine learning, business intelligence, database, and DevOps. It simplifies software licensing and procurement with flexible pricing options and multiple deployment methods. 

For information, see [AWS Marketplace Documentation](https://docs.aws.amazon.com/marketplace/index.html#lang/en_us).

## Topics
<a name="sagemaker-marketplace-topics"></a>
+ [SageMaker AI Algorithms](#sagemaker-mkt-algorithm)
+ [SageMaker AI Model Packages](#sagemaker-mkt-model-package)
+ [Listings for your own algorithms and models with the AWS Marketplace](sagemaker-marketplace-sell.md)
+ [Find and Subscribe to Algorithms and Model Packages on AWS Marketplace](sagemaker-mkt-find-subscribe.md)
+ [Usage of Algorithm and Model Package Resources](sagemaker-mkt-buy.md)

## SageMaker AI Algorithms
<a name="sagemaker-mkt-algorithm"></a>

An algorithm enables you to perform end-to-end machine learning. It has two logical components: training and inference. Buyers can use the training component to create training jobs in SageMaker AI and build a machine learning model. SageMaker AI saves the model artifacts generated by the algorithm during training to an Amazon S3 bucket. For more information, see [Train a Model with Amazon SageMaker](how-it-works-training.md).

Buyers use the inference component with the model artifacts generated during a training job to create a deployable model in their SageMaker AI account. They can use the deployable model for real-time inference by using SageMaker AI hosting services. Or, they can get inferences for an entire dataset by running batch transform jobs. For more information, see [Model deployment options in Amazon SageMaker AI](how-it-works-deployment.md).

## SageMaker AI Model Packages
<a name="sagemaker-mkt-model-package"></a>

Buyers use a model package to build a deployable model in SageMaker AI. They can use the deployable model for real-time inference by using SageMaker AI hosting services. Or, they can get inferences for an entire dataset by running batch transform jobs. For more information, see [Model deployment options in Amazon SageMaker AI](how-it-works-deployment.md). As a seller, you can build your model artifacts by training in SageMaker AI, or you can use your own model artifacts from a model that you trained outside of SageMaker AI. You can charge buyers for inference.

# Custom algorithms and models with the AWS Marketplace
<a name="your-algorithms-marketplace"></a>

The following sections show how to create algorithm and model package resources that you can use locally and publish to the AWS Marketplace.

**Topics**
+ [Creation of Algorithm and Model Package Resources](sagemaker-mkt-create.md)
+ [Usage of Algorithm and Model Package Resources](sagemaker-mkt-buy.md)

# Creation of Algorithm and Model Package Resources
<a name="sagemaker-mkt-create"></a>

After your training and/or inference code is packaged in Docker containers, create algorithm and model package resources that you can use in your Amazon SageMaker AI account and, optionally, publish on AWS Marketplace.

**Topics**
+ [Create an Algorithm Resource](sagemaker-mkt-create-algo.md)
+ [Create a Model Package Resource](sagemaker-mkt-create-model-package.md)

# Create an Algorithm Resource
<a name="sagemaker-mkt-create-algo"></a>

You can create an algorithm resource to use with training jobs in Amazon SageMaker AI, and you can publish it on AWS Marketplace. The following sections explain how to do that using the AWS Management Console and the SageMaker API.

To create an algorithm resource, you specify the following information:
+ The Docker container that contains the training and, optionally, inference code.
+ The configuration of the input data that your algorithm expects for training.
+ The hyperparameters that your algorithm supports.
+ Metrics that your algorithm sends to Amazon CloudWatch during training jobs.
+ The instance types that your algorithm supports for training and inference, and whether it supports distributed training across multiple instances.
+ Validation profiles, which are training jobs that SageMaker AI uses to test your algorithm's training code and batch transform jobs that SageMaker AI runs to test your algorithm's inference code.

  To ensure that buyers and sellers can be confident that products work in SageMaker AI, we require that you validate your algorithms before listing them on AWS Marketplace. You can list products on AWS Marketplace only if validation succeeds. To validate your algorithms, SageMaker AI uses your validation profile and sample data to run the following validation tasks:

  1. Create a training job in your account to verify that your training image works with SageMaker AI.

  1. If you included inference code in your algorithm, create a model in your account using the algorithm's inference image and the model artifacts produced by the training job.

  1. If you included inference code in your algorithm, create a transform job in your account using the model to verify that your inference image works with SageMaker AI.

  When you list your product on AWS Marketplace, the inputs and outputs of this validation process persist as part of your product and are made available to your buyers. This helps buyers understand and evaluate the product before they buy it. For example, buyers can inspect the input data that you used, the outputs generated, and the logs and metrics emitted by your code. The more comprehensive your validation specification, the easier it is for customers to evaluate your product.
**Note**  
In your validation profile, provide only data that you want to expose publicly.

  Validation can take up to a few hours. To see the status of the jobs in your account, in the SageMaker AI console, see the **Training jobs** and **Transform jobs** pages. If validation fails, you can access the scan and validation reports from the SageMaker AI console. If any issues are found, fix them and create the algorithm again.
**Note**  
To publish your algorithm on AWS Marketplace, at least one validation profile is required.

You can create an algorithm by using either the SageMaker AI console or the SageMaker AI API.

**Topics**
+ [Create an Algorithm Resource (Console)](#sagemaker-mkt-create-algo-console)
+ [Create an Algorithm Resource (API)](#sagemaker-mkt-create-algo-api)

## Create an Algorithm Resource (Console)
<a name="sagemaker-mkt-create-algo-console"></a>

**To create an algorithm resource (console)**

1. Open the SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left menu, choose **Training**.

1. From the dropdown menu, choose **Algorithms**, then choose **Create algorithm**.

1. On the **Training specifications** page, provide the following information:

   1. For **Algorithm name**, type a name for your algorithm. The algorithm name must be unique in your account and in the AWS region. The name must have 1 to 64 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).

   1. Type a description for your algorithm. This description appears in the SageMaker AI console and in the AWS Marketplace.

   1. For **Training image**, type the path in Amazon ECR where your training container is stored.

   1. For **Support distributed training**, choose **Yes** if your algorithm supports training on multiple instances. Otherwise, choose **No**.

   1. For **Supported instance types for training**, choose the instance types that your algorithm supports.

   1. For **Channel specification**, specify up to 8 channels of input data for your algorithm. For example, you might specify 3 input channels named `train`, `validation`, and `test`. For each channel, specify the following information:

      1. For **Channel name**, type a name for the channel. The name must have 1 to 64 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).

      1. To require the channel for your algorithm, choose **Channel required**.

      1. Type a description for the channel.

      1. For **Supported input modes**, choose **Pipe mode** if your algorithm supports streaming the input data, and **File mode** if your algorithm supports downloading the input data as a file. You can choose both.

      1. For **Supported content types**, type the MIME type that your algorithm expects for input data.

      1. For **Supported compression type**, choose **Gzip** if your algorithm supports Gzip compression. Otherwise, choose **None**.

      1. Choose **Add channel** to add another data input channel, or choose **Next** if you are done adding channels.

1. On the **Tuning specifications** page, provide the following information:

   1. For **Hyperparameter specification**, specify the hyperparameters that your algorithm supports by editing the JSON object. For each hyperparameter that your algorithm supports, construct a JSON block similar to the following:

      ```
      {
          "DefaultValue": "5",
          "Description": "The first hyperparameter",
          "IsRequired": true,
          "IsTunable": false,
          "Name": "intRange",
          "Range": {
              "IntegerParameterRangeSpecification": {
                  "MaxValue": "10",
                  "MinValue": "1"
              }
          },
          "Type": "Integer"
      }
      ```

      In the JSON, supply the following:

      1. For `DefaultValue`, specify a default value for the hyperparameter, if there is one.

      1. For `Description`, specify a description for the hyperparameter.

      1. For `IsRequired`, specify whether the hyperparameter is required.

      1. For `IsTunable`, specify `true` if this hyperparameter can be tuned when a user runs a hyperparameter tuning job that uses this algorithm. For information, see [Automatic model tuning with SageMaker AI](automatic-model-tuning.md).

      1. For `Name`, specify a name for the hyperparameter.

      1. For `Range`, specify one of the following:
         + `IntegerParameterRangeSpecification` - the values of the hyperparameter are integers. Specify minimum and maximum values for the hyperparameter.
         + `ContinuousParameterRangeSpecification` - the values of the hyperparameter are floating-point values. Specify minimum and maximum values for the hyperparameter.
         + `CategoricalParameterRangeSpecification` - the values of the hyperparameter are categorical values. Specify a list of all of the possible values.

      1. For `Type`, specify `Integer`, `Continuous`, or `Categorical`. The value must correspond to the type of `Range` that you specified.
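
      For example, a categorical hyperparameter specification might look like the following. The hyperparameter name and values here are hypothetical, and follow the same JSON shape as the block above:

      ```
      {
          "DefaultValue": "sgd",
          "Description": "The optimizer to use",
          "IsRequired": false,
          "IsTunable": true,
          "Name": "optimizer",
          "Range": {
              "CategoricalParameterRangeSpecification": {
                  "Values": ["sgd", "adam"]
              }
          },
          "Type": "Categorical"
      }
      ```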

   1. For **Metric definitions**, specify any training metrics that you want your algorithm to emit. SageMaker AI uses the regular expression that you specify to find the metrics by parsing the logs from your training container during training. Users can view these metrics when they run training jobs with your algorithm, and they can monitor and plot the metrics in Amazon CloudWatch. For information, see [Amazon CloudWatch Metrics for Monitoring and Analyzing Training Jobs](training-metrics.md). For each metric, provide the following information:

      1. For **Metric name**, type a name for the metric.

      1. For **Regex**, type the regular expression that SageMaker AI uses to parse training logs so that it can find the metric value.

      1. For **Objective metric support**, choose **Yes** if this metric can be used as the objective metric for a hyperparameter tuning job. For information, see [Automatic model tuning with SageMaker AI](automatic-model-tuning.md).

      1. Choose **Add metric** to add another metric, or choose **Next** if you are done adding metrics.

1. On the **Inference specifications** page, provide the following information if your algorithm supports inference:

   1. For **Location of inference image**, type the path in Amazon ECR where your inference container is stored.

   1. For **Container DNS host name**, type the name of a DNS host for your image.

   1. For **Supported instance types for real-time inference**, choose the instance types that your algorithm supports for models deployed as hosted endpoints in SageMaker AI. For information, see [Deploy models for inference](deploy-model.md).

   1. For **Supported instance types for batch transform jobs**, choose the instance types that your algorithm supports for batch transform jobs. For information, see [Batch transform for inference with Amazon SageMaker AI](batch-transform.md).

   1. For **Supported content types**, type the type of input data that your algorithm expects for inference requests.

   1. For **Supported response MIME types**, type the MIME types that your algorithm supports for inference responses.

   1. Choose **Next**.

1. On the **Validation specifications** page, provide the following information:

   1. For **Publish this algorithm on AWS Marketplace**, choose **Yes** to publish the algorithm on AWS Marketplace.

   1. For **Validate this resource**, choose **Yes** if you want SageMaker AI to run training jobs and/or batch transform jobs that you specify to test the training and/or inference code of your algorithm.
**Note**  
To publish your algorithm on AWS Marketplace, your algorithm must be validated.

   1. For **IAM role**, choose an IAM role that has the required permissions to run training jobs and batch transform jobs in SageMaker AI, or choose **Create a new role** to allow SageMaker AI to create a role that has the `AmazonSageMakerFullAccess` managed policy attached. For information, see [How to use SageMaker AI execution roles](sagemaker-roles.md).

   1. For **Validation profile**, specify the following:
      + A name for the validation profile.
      + A **Training job definition**. This is a JSON block that describes a training job. This is in the same format as the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobDefinition.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobDefinition.html) input parameter of the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html) API.
      + A **Transform job definition**. This is a JSON block that describes a batch transform job. This is in the same format as the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html) input parameter of the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html) API.

   1. Choose **Create algorithm**.

## Create an Algorithm Resource (API)
<a name="sagemaker-mkt-create-algo-api"></a>

To create an algorithm resource by using the SageMaker API, call the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html) API. 
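
The request below sketches the shape of a `CreateAlgorithm` call. The algorithm name, image path, account number, and field values are placeholders, and the specification is deliberately minimal; see the API reference linked above for the full set of fields.

```
# Minimal sketch of a CreateAlgorithm request (all names, paths, and
# account numbers below are placeholders).
# With AWS credentials configured, you would send it with:
#   import boto3
#   boto3.client("sagemaker").create_algorithm(**request)
request = {
    "AlgorithmName": "my-algorithm",
    "AlgorithmDescription": "Example marketplace algorithm",
    "TrainingSpecification": {
        "TrainingImage": "012345678901.dkr.ecr.us-east-2.amazonaws.com/my-algo:latest",
        "SupportedTrainingInstanceTypes": ["ml.m5.xlarge"],
        "SupportsDistributedTraining": False,
        # Input channels that the algorithm expects for training.
        "TrainingChannels": [
            {
                "Name": "train",
                "IsRequired": True,
                "SupportedContentTypes": ["text/csv"],
                "SupportedInputModes": ["File"],
            }
        ],
        # Hyperparameters, in the same shape as the console JSON editor.
        "SupportedHyperParameters": [
            {
                "Name": "intRange",
                "Type": "Integer",
                "Range": {
                    "IntegerParameterRangeSpecification": {
                        "MinValue": "1",
                        "MaxValue": "10",
                    }
                },
                "IsTunable": False,
                "IsRequired": True,
                "DefaultValue": "5",
            }
        ],
    },
    # Required to list the algorithm on AWS Marketplace; a
    # ValidationSpecification must also be supplied in that case.
    "CertifyForMarketplace": True,
}
```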

# Create a Model Package Resource
<a name="sagemaker-mkt-create-model-package"></a>

To create a model package resource that you can use to create deployable models in Amazon SageMaker AI and publish on AWS Marketplace, specify the following information:
+ The Docker container that contains the inference code, or the algorithm resource that was used to train the model.
+ The location of the model artifacts. Model artifacts can either be packaged in the same Docker container as the inference code or stored in Amazon S3.
+ The instance types that your model package supports for both real-time inference and batch transform jobs.
+ Validation profiles, which are batch transform jobs that SageMaker AI runs to test your model package's inference code.

  Before listing model packages on AWS Marketplace, you must validate them. This ensures that buyers and sellers can be confident that products work in Amazon SageMaker AI. You can list products on AWS Marketplace only if validation succeeds. 

  The validation procedure uses your validation profile and sample data to run the following validation tasks:

  1. Create a model in your account using the model package's inference image and the optional model artifacts that are stored in Amazon S3.
**Note**  
A model package is specific to the region in which you create it. The S3 bucket where the model artifacts are stored must be in the same region where you created the model package.

  1. Create a transform job in your account using the model to verify that your inference image works with SageMaker AI.

  1. Create a validation profile.
**Note**  
In your validation profile, provide only data that you want to expose publicly.

  Validation can take up to a few hours. To see the status of the jobs in your account, in the SageMaker AI console, see the **Transform jobs** page. If validation fails, you can access the scan and validation reports from the SageMaker AI console. After fixing issues, recreate the model package. When the status of the model package is `COMPLETED`, find it in the SageMaker AI console and start the listing process.
**Note**  
To publish your model package on AWS Marketplace, at least one validation profile is required.

You can create a model package either by using the SageMaker AI console or by using the SageMaker API.

**Topics**
+ [Create a Model Package Resource (Console)](#sagemaker-mkt-create-model-pkg-console)
+ [Create a Model Package Resource (API)](#sagemaker-mkt-create-model-pkg-api)

## Create a Model Package Resource (Console)
<a name="sagemaker-mkt-create-model-pkg-console"></a>

**To create a model package in the SageMaker AI console:**

1. Open the SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. From the left menu, choose **Inference**.

1. Choose **Marketplace model packages**, then choose **Create marketplace model package**.

1. On the **Inference specifications** page, provide the following information:

   1. For **Model package name**, type a name for your model package. The model package name must be unique in your account and in the AWS region. The name must have 1 to 64 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).

   1. Type a description for your model package. This description appears in the SageMaker AI console and in the AWS Marketplace.

   1. For **Inference specification options**, choose **Provide the location of the inference image and model artifacts** to create a model package by using an inference container and model artifacts. Choose **Provide the algorithm used for training and its model artifacts** to create a model package from an algorithm resource that you created or subscribed to from AWS Marketplace.

   1. If you chose **Provide the location of the inference image and model artifacts** for **Inference specification options**, provide the following information for **Container definition** and **Supported resources**:

      1. For **Location of inference image**, type the path to the image that contains your inference code. The image must be stored as a Docker container in Amazon ECR.

      1. For **Location of model data artifacts**, type the location in S3 where your model artifacts are stored.

      1. For **Container DNS host name**, type the name of the DNS host to use for your container.

      1. For **Supported instance types for real-time inference**, choose the instance types that your model package supports for real-time inference from SageMaker AI hosted endpoints.

      1. For **Supported instance types for batch transform jobs**, choose the instance types that your model package supports for batch transform jobs.

      1. For **Supported content types**, type the content types that your model package expects for inference requests.

      1. For **Supported response MIME types**, type the MIME types that your model package uses to provide inferences.

   1. If you chose **Provide the algorithm used for training and its model artifacts** for **Inference specification options**, provide the following information:

      1. For **Algorithm ARN**, type the Amazon Resource Name (ARN) of the algorithm resource to use to create the model package.

      1. For **Location of model data artifacts**, type the location in S3 where your model artifacts are stored.

   1. Choose **Next**.

1. On the **Validation and scanning** page, provide the following information:

   1. For **Publish this model package on AWS Marketplace**, choose **Yes** to publish the model package on AWS Marketplace.

   1. For **Validate this resource**, choose **Yes** if you want SageMaker AI to run batch transform jobs that you specify to test the inference code of your model package.
**Note**  
To publish your model package on AWS Marketplace, your model package must be validated.

   1. For **IAM role**, choose an IAM role that has the required permissions to run batch transform jobs in SageMaker AI, or choose **Create a new role** to allow SageMaker AI to create a role that has the `AmazonSageMakerFullAccess` managed policy attached. For information, see [How to use SageMaker AI execution roles](sagemaker-roles.md).

   1. For **Validation profile**, specify the following:
      + A name for the validation profile.
      + A **Transform job definition**. This is a JSON block that describes a batch transform job. This is in the same format as the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TransformJobDefinition.html) input parameter of the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateAlgorithm.html) API.

1. Choose **Create marketplace model package**.

## Create a Model Package Resource (API)
<a name="sagemaker-mkt-create-model-pkg-api"></a>

To create a model package by using the SageMaker API, call the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModelPackage.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModelPackage.html) API. 
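
The request below sketches the shape of a `CreateModelPackage` call for a model package built from an inference container and model artifacts. The name, image path, bucket, and values are placeholders; see the API reference linked above for the full set of fields.

```
# Minimal sketch of a CreateModelPackage request (all names, paths, and
# account numbers below are placeholders).
# With AWS credentials configured, you would send it with:
#   import boto3
#   boto3.client("sagemaker").create_model_package(**request)
request = {
    "ModelPackageName": "my-model-package",
    "ModelPackageDescription": "Example marketplace model package",
    "InferenceSpecification": {
        # The inference container and the S3 location of the model artifacts.
        "Containers": [
            {
                "Image": "012345678901.dkr.ecr.us-east-2.amazonaws.com/my-inference:latest",
                "ModelDataUrl": "s3://amzn-s3-demo-bucket/model.tar.gz",
            }
        ],
        "SupportedRealtimeInferenceInstanceTypes": ["ml.m5.xlarge"],
        "SupportedTransformInstanceTypes": ["ml.m5.xlarge"],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    # Required to list the model package on AWS Marketplace; a
    # ValidationSpecification must also be supplied in that case.
    "CertifyForMarketplace": True,
}
```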

# Usage of Algorithm and Model Package Resources
<a name="sagemaker-mkt-buy"></a>

You can create algorithms and model packages as resources in your Amazon SageMaker AI account, and you can find and subscribe to algorithms and model packages on AWS Marketplace.

Use algorithms to:
+ Run training jobs. For information, see [Use an Algorithm to Run a Training Job](sagemaker-mkt-algo-train.md).
+ Run hyperparameter tuning jobs. For information, see [Use an Algorithm to Run a Hyperparameter Tuning Job](sagemaker-mkt-algo-tune.md).
+ Create model packages. After you use an algorithm resource to run a training job or a hyperparameter tuning job, you can use the model artifacts that these jobs output along with the algorithm to create a model package. For information, see [Create a Model Package Resource](sagemaker-mkt-create-model-package.md).
**Note**  
If you subscribe to an algorithm on AWS Marketplace, you must create a model package before you can use it to get inferences by creating a hosted endpoint or running a batch transform job.

![\[Market buyer workflow.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/mkt-buyer-workflow.png)


Use model packages to:
+ Create models that you can use to get real-time inference or run batch transform jobs. For information, see [Use a Model Package to Create a Model](sagemaker-mkt-model-pkg-model.md).
+ Create hosted endpoints to get real-time inference. For information, see [Deploy the Model to SageMaker AI Hosting Services](ex1-model-deployment.md#ex1-deploy-model).
+ Create batch transform jobs. For information, see [(Optional) Make Prediction with Batch Transform](ex1-model-deployment.md#ex1-batch-transform).

**Topics**
+ [Use an Algorithm to Run a Training Job](sagemaker-mkt-algo-train.md)
+ [Use an Algorithm to Run a Hyperparameter Tuning Job](sagemaker-mkt-algo-tune.md)
+ [Use a Model Package to Create a Model](sagemaker-mkt-model-pkg-model.md)

# Use an Algorithm to Run a Training Job
<a name="sagemaker-mkt-algo-train"></a>

You can use an algorithm resource to create a training job by using the Amazon SageMaker AI console, the low-level Amazon SageMaker API, or the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable).

**Note**  
Your execution role must have `sagemaker:DescribeAlgorithm` permission for the algorithm resource that you specify. For more information about execution role permissions, see [CreateTrainingJob API: Execution Role Permissions](sagemaker-roles.md#sagemaker-roles-createtrainingjob-perms).

**Topics**
+ [Use an Algorithm to Run a Training Job (Console)](#sagemaker-mkt-algo-train-console)
+ [Use an Algorithm to Run a Training Job (API)](#sagemaker-mkt-algo-train-api)
+ [Use an Algorithm to Run a Training Job ([Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable))](#sagemaker-mkt-algo-train-sdk)

## Use an Algorithm to Run a Training Job (Console)
<a name="sagemaker-mkt-algo-train-console"></a>

**To use an algorithm to run a training job (console)**

1. Open the SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Choose **Algorithms**.

1. Choose an algorithm that you created from the list on the **My algorithms** tab or choose an algorithm that you subscribed to on the **AWS Marketplace subscriptions** tab.

1. Choose **Create training job**.

   The algorithm that you chose is automatically selected.

1. On the **Create training job** page, provide the following information:

   1. For **Job name**, type a name for the training job.

   1. For **IAM role**, choose an IAM role that has the required permissions to run training jobs in SageMaker AI, or choose **Create a new role** to allow SageMaker AI to create a role that has the `AmazonSageMakerFullAccess` managed policy attached. For information, see [How to use SageMaker AI execution roles](sagemaker-roles.md).

   1. For **Resource configuration**, provide the following information:

      1. For **Instance type**, choose the instance type to use for training.

      1. For **Instance count**, type the number of ML instances to use for the training job.

      1. For **Additional volume per instance (GB)**, type the size of the ML storage volume that you want to provision. ML storage volumes store model artifacts and incremental states.

      1. For **Encryption key**, if you want Amazon SageMaker AI to use an AWS Key Management Service key to encrypt data in the ML storage volume attached to the training instance, specify the key.

      1. For **Stopping condition**, specify the maximum amount of time in seconds, minutes, hours, or days, that you want the training job to run.

   1. For **VPC**, choose an Amazon VPC that you want to allow your training container to access. For more information, see [Give SageMaker AI Training Jobs Access to Resources in Your Amazon VPC](train-vpc.md).

   1. For **Hyperparameters**, specify the values of the hyperparameters to use for the training job.

   1. For **Input data configuration**, specify the following values for each channel of input data to use for the training job. You can see which channels the algorithm that you're using supports, and the content type, supported compression type, and supported input modes for each channel, under the **Channel specification** section of the **Algorithm summary** page for the algorithm.

      1. For **Channel name**, type the name of the input channel.

      1. For **Content type**, type the content type of the data that the algorithm expects for the channel.

      1. For **Compression type**, choose the data compression type to use, if any.

      1. For **Record wrapper**, choose `RecordIO` if the algorithm expects data in the `RecordIO` format.

      1. For **S3 data type**, **S3 data distribution type**, and **S3 location**, specify the appropriate values. For information about what these values mean, see [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_S3DataSource.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_S3DataSource.html).

      1. For **Input mode**, choose **File** to download the data to the provisioned ML storage volume and mount the directory to a Docker volume. Choose **Pipe** to stream data directly from Amazon S3 to the container.

      1. To add another input channel, choose **Add channel**. If you are finished adding input channels, choose **Done**.

   1. For **Output location**, specify the following values:

      1. For **S3 output path**, choose the S3 location where the training job stores output, such as model artifacts.
**Note**  
You use the model artifacts stored at this location to create a model or model package from this training job.

      1. For **Encryption key**, specify the key if you want SageMaker AI to use an AWS KMS key to encrypt output data at rest in the S3 location.

   1. For **Tags**, specify one or more tags to manage the training job. Each tag consists of a key and an optional value. Tag keys must be unique per resource.

   1. Choose **Create training job** to run the training job.

## Use an Algorithm to Run a Training Job (API)
<a name="sagemaker-mkt-algo-train-api"></a>

To use an algorithm to run a training job by using the SageMaker API, specify either the name or the Amazon Resource Name (ARN) as the `AlgorithmName` field of the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AlgorithmSpecification.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AlgorithmSpecification.html) object that you pass to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html). For information about training models in SageMaker AI, see [Train a Model with Amazon SageMaker](how-it-works-training.md).
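
For example, a `CreateTrainingJob` request that names an algorithm resource might be sketched as follows. The job name, ARNs, bucket names, and values are placeholders:

```
# Minimal sketch of a CreateTrainingJob request that uses an algorithm
# resource (all names, ARNs, and paths below are placeholders).
# With AWS credentials configured, you would send it with:
#   import boto3
#   boto3.client("sagemaker").create_training_job(**request)
request = {
    "TrainingJobName": "my-training-job",
    # Name the algorithm resource (or its ARN) instead of a training image.
    "AlgorithmSpecification": {
        "AlgorithmName": "arn:aws:sagemaker:us-east-2:012345678901:algorithm/my-algorithm",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::012345678901:role/SageMakerRole",
    "InputDataConfig": [
        {
            "ChannelName": "training",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://amzn-s3-demo-bucket/train/",
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
            "ContentType": "text/csv",
        }
    ],
    "OutputDataConfig": {"S3OutputPath": "s3://amzn-s3-demo-bucket/output/"},
    "ResourceConfig": {
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}
```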

## Use an Algorithm to Run a Training Job ([Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable))
<a name="sagemaker-mkt-algo-train-sdk"></a>

To use an algorithm that you created or subscribed to on AWS Marketplace to create a training job, create an `AlgorithmEstimator` object and specify either the Amazon Resource Name (ARN) or the name of the algorithm as the value of the `algorithm_arn` argument. Then call the `fit` method of the estimator. For example:

```
import os

import sagemaker
from sagemaker import AlgorithmEstimator

# DATA_DIR is the local directory that contains your training data.
data_path = os.path.join(DATA_DIR, 'marketplace', 'training')

sagemaker_session = sagemaker.Session()

algo = AlgorithmEstimator(
    algorithm_arn='arn:aws:sagemaker:us-east-2:012345678901:algorithm/my-algorithm',
    role='SageMakerRole',
    instance_count=1,
    instance_type='ml.c4.xlarge',
    sagemaker_session=sagemaker_session,
    base_job_name='test-marketplace')

# Upload the training data to Amazon S3, then start the training job.
train_input = algo.sagemaker_session.upload_data(
    path=data_path, key_prefix='integ-test-data/marketplace/train')

algo.fit({'training': train_input})
```

# Use an Algorithm to Run a Hyperparameter Tuning Job
<a name="sagemaker-mkt-algo-tune"></a>

The following section explains how to use an algorithm resource to run a hyperparameter tuning job in Amazon SageMaker AI. A hyperparameter tuning job finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. For more information, see [Automatic model tuning with SageMaker AI](automatic-model-tuning.md).

You can use an algorithm resource to create a hyperparameter tuning job by using the Amazon SageMaker AI console, the low-level Amazon SageMaker API, or the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable).

**Topics**
+ [Use an Algorithm to Run a Hyperparameter Tuning Job (Console)](#sagemaker-mkt-algo-tune-console)
+ [Use an Algorithm to Run a Hyperparameter Tuning Job (API)](#sagemaker-mkt-algo-tune-api)
+ [Use an Algorithm to Run a Hyperparameter Tuning Job ([Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable))](#sagemaker-mkt-algo-tune-sdk)

## Use an Algorithm to Run a Hyperparameter Tuning Job (Console)
<a name="sagemaker-mkt-algo-tune-console"></a>

**To use an algorithm to run a hyperparameter tuning job (console)**

1. Open the SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Choose **Algorithms**.

1. Choose an algorithm that you created from the list on the **My algorithms** tab or choose an algorithm that you subscribed to on the **AWS Marketplace subscriptions** tab.

1. Choose **Create hyperparameter tuning job**.

   The algorithm that you chose is automatically selected.

1. On the **Create hyperparameter tuning job** page, provide the following information:

   1. For **Warm start**, choose **Enable warm start** to use the information from previous hyperparameter tuning jobs as a starting point for this hyperparameter tuning job. For more information, see [Run a Warm Start Hyperparameter Tuning Job](automatic-model-tuning-warm-start.md).

      1. Choose **Identical data and algorithm** if your input data is the same as the input data for the parent jobs of this hyperparameter tuning job, or choose **Transfer learning** to use additional or different input data for this hyperparameter tuning job.

      1. For **Parent hyperparameter tuning job(s)**, choose up to 5 hyperparameter tuning jobs to use as parents of this hyperparameter tuning job.

   1. For **Hyperparameter tuning job name**, type a name for the tuning job.

   1. For **IAM role**, choose an IAM role that has the required permissions to run hyperparameter tuning jobs in SageMaker AI, or choose **Create a new role** to allow SageMaker AI to create a role that has the `AmazonSageMakerFullAccess` managed policy attached. For information, see [How to use SageMaker AI execution roles](sagemaker-roles.md).

   1. For **VPC**, choose an Amazon VPC that you want to allow the training jobs that the tuning job launches to access. For more information, see [Give SageMaker AI Training Jobs Access to Resources in Your Amazon VPC](train-vpc.md).

   1. Choose **Next**.

   1. For **Objective metric**, choose the metric that the hyperparameter tuning job uses to determine the best combination of hyperparameters, and choose whether to minimize or maximize this metric. For more information, see [View the Best Training Job](automatic-model-tuning-ex-tuning-job.md#automatic-model-tuning-best-training-job).

   1. For **Hyperparameter configuration**, choose ranges for the tunable hyperparameters that you want the tuning job to search, and set static values for hyperparameters that you want to remain constant in all training jobs that the hyperparameter tuning job launches. For more information, see [Define Hyperparameter Ranges](automatic-model-tuning-define-ranges.md).

   1. Choose **Next**.

   1. For **Input data configuration**, specify the following values for each channel of input data to use for the hyperparameter tuning job. You can see which channels the algorithm you're using for hyperparameter tuning supports, and the content type, supported compression type, and supported input modes for each channel, in the **Channel specification** section of the **Algorithm summary** page for the algorithm.

      1. For **Channel name**, type the name of the input channel.

      1. For **Content type**, type the content type of the data that the algorithm expects for the channel.

      1. For **Compression type**, choose the data compression type to use, if any.

      1. For **Record wrapper**, choose `RecordIO` if the algorithm expects data in the `RecordIO` format.

      1. For **S3 data type**, **S3 data distribution type**, and **S3 location**, specify the appropriate values. For information about what these values mean, see [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_S3DataSource.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_S3DataSource.html).

      1. For **Input mode**, choose **File** to download the data to the provisioned ML storage volume and mount the directory to a Docker volume, or choose **Pipe** to stream data directly from Amazon S3 to the container.

      1. To add another input channel, choose **Add channel**. If you are finished adding input channels, choose **Done**.

   1. For **Output location**, specify the following values:

      1. For **S3 output path**, choose the S3 location where the training jobs that this hyperparameter tuning job launches store output, such as model artifacts.
**Note**  
You use the model artifacts stored at this location to create a model or model package from this hyperparameter tuning job.

      1. For **Encryption key**, if you want SageMaker AI to use an AWS KMS key to encrypt output data at rest in the S3 location, specify the key.

   1. For **Resource configuration**, provide the following information:

      1. For **Instance type**, choose the instance type to use for each training job that the hyperparameter tuning job launches.

      1. For **Instance count**, type the number of ML instances to use for each training job that the hyperparameter tuning job launches.

      1. For **Additional volume per instance (GB)**, type the size of the ML storage volume that you want to provision for each training job that the hyperparameter tuning job launches. ML storage volumes store model artifacts and incremental states.

      1. For **Encryption key**, if you want Amazon SageMaker AI to use an AWS Key Management Service key to encrypt data in the ML storage volume attached to the training instances, specify the key.

   1. For **Resource limits**, provide the following information:

      1. For **Maximum training jobs**, specify the maximum number of training jobs that you want the hyperparameter tuning job to launch. A hyperparameter tuning job can launch a maximum of 500 training jobs.

      1. For **Maximum parallel training jobs**, specify the maximum number of concurrent training jobs that the hyperparameter tuning job can launch. A hyperparameter tuning job can launch a maximum of 10 concurrent training jobs.

      1. For **Stopping condition**, specify the maximum amount of time in seconds, minutes, hours, or days, that you want each training job that the hyperparameter tuning job launches to run.

   1. For **Tags**, specify one or more tags to manage the hyperparameter tuning job. Each tag consists of a key and an optional value. Tag keys must be unique per resource.

   1. Choose **Create jobs** to run the hyperparameter tuning job.

## Use an Algorithm to Run a Hyperparameter Tuning Job (API)
<a name="sagemaker-mkt-algo-tune-api"></a>

To use an algorithm to run a hyperparameter tuning job by using the SageMaker API, specify either the name or the Amazon Resource Name (ARN) of the algorithm as the `AlgorithmName` field of the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AlgorithmSpecification.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AlgorithmSpecification.html) object that you pass to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateHyperParameterTuningJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateHyperParameterTuningJob.html). For information about hyperparameter tuning in SageMaker AI, see [Automatic model tuning with SageMaker AI](automatic-model-tuning.md).
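As a sketch, the request combines a tuning configuration with a training job definition that names the algorithm. The ARNs, metric, ranges, and bucket paths below are hypothetical placeholders, and the call itself is commented out because it requires AWS credentials:

```
# Request sketch for CreateHyperParameterTuningJob using an algorithm resource.
# All ARNs, names, ranges, and S3 paths are hypothetical placeholders.
params = {
    "HyperParameterTuningJobName": "my-marketplace-tuning-job",
    "HyperParameterTuningJobConfig": {
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Maximize",
            "MetricName": "validation:accuracy",
        },
        "ResourceLimits": {
            "MaxNumberOfTrainingJobs": 10,
            "MaxParallelTrainingJobs": 2,
        },
        # Ranges for hyperparameters that the algorithm declares as tunable.
        "ParameterRanges": {
            "IntegerParameterRanges": [
                {"Name": "max_leaf_nodes", "MinValue": "2", "MaxValue": "20"}
            ]
        },
    },
    "TrainingJobDefinition": {
        "AlgorithmSpecification": {
            # Name or ARN of the algorithm resource.
            "AlgorithmName": "arn:aws:sagemaker:us-east-2:111122223333:algorithm/my-algorithm",
            "TrainingInputMode": "File",
        },
        "RoleArn": "arn:aws:iam::111122223333:role/SageMakerRole",
        "OutputDataConfig": {"S3OutputPath": "s3://amzn-s3-demo-bucket/output"},
        "ResourceConfig": {
            "InstanceType": "ml.c4.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    },
}

# With AWS credentials configured, the call would be:
# import boto3
# boto3.client("sagemaker").create_hyper_parameter_tuning_job(**params)
```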

## Use an Algorithm to Run a Hyperparameter Tuning Job ([Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable))
<a name="sagemaker-mkt-algo-tune-sdk"></a>

To create a hyperparameter tuning job with an algorithm that you created or subscribed to on AWS Marketplace, create an `AlgorithmEstimator` object and specify either the Amazon Resource Name (ARN) or the name of the algorithm as the value of the `algorithm_arn` argument. Then initialize a `HyperparameterTuner` object with the `AlgorithmEstimator` you created as the value of the `estimator` argument. Finally, call the `fit` method of the `HyperparameterTuner`. For example:

```
import os

import sagemaker
from sagemaker import AlgorithmEstimator
from sagemaker.tuner import HyperparameterTuner, IntegerParameter

sagemaker_session = sagemaker.Session()

# DATA_DIR is the local directory that contains your training data.
data_path = os.path.join(DATA_DIR, 'marketplace', 'training')

algo = AlgorithmEstimator(
    algorithm_arn='arn:aws:sagemaker:us-east-2:764419575721:algorithm/scikit-decision-trees-1542410022',
    role='SageMakerRole',
    instance_count=1,
    instance_type='ml.c4.xlarge',
    sagemaker_session=sagemaker_session,
    base_job_name='test-marketplace')

train_input = algo.sagemaker_session.upload_data(
    path=data_path, key_prefix='integ-test-data/marketplace/train')

algo.set_hyperparameters(max_leaf_nodes=10)

# Ranges for the tunable hyperparameters that the algorithm declares.
hyperparameter_ranges = {'max_depth': IntegerParameter(1, 10)}

tuner = HyperparameterTuner(estimator=algo, base_tuning_job_name='some-name',
                            objective_metric_name='validation:accuracy',
                            hyperparameter_ranges=hyperparameter_ranges,
                            max_jobs=2, max_parallel_jobs=2)

tuner.fit({'training': train_input}, include_cls_metadata=False)
tuner.wait()
```

# Use a Model Package to Create a Model
<a name="sagemaker-mkt-model-pkg-model"></a>

Use a model package to create a deployable model that you can use to get real-time inferences by creating a hosted endpoint or to run batch transform jobs. You can create a deployable model from a model package by using the Amazon SageMaker AI console, the low-level SageMaker API, or the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable).

**Topics**
+ [Use a Model Package to Create a Model (Console)](#sagemaker-mkt-model-pkg-model-console)
+ [Use a Model Package to Create a Model (API)](#sagemaker-mkt-model-pkg-model-api)
+ [Use a Model Package to Create a Model ([Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable))](#sagemaker-mkt-model-pkg-model-sdk)

## Use a Model Package to Create a Model (Console)
<a name="sagemaker-mkt-model-pkg-model-console"></a>

**To create a deployable model from a model package (console)**

1. Open the SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Choose **Model packages**.

1. Choose a model package that you created from the list on the **My model packages** tab or choose a model package that you subscribed to on the **AWS Marketplace subscriptions** tab.

1. Choose **Create model**.

1. For **Model name**, type a name for the model.

1. For **IAM role**, choose an IAM role that has the required permissions to call other services on your behalf, or choose **Create a new role** to allow SageMaker AI to create a role that has the `AmazonSageMakerFullAccess` managed policy attached. For information, see [How to use SageMaker AI execution roles](sagemaker-roles.md).

1. For **VPC**, choose an Amazon VPC that you want to allow the model to access. For more information, see [Give SageMaker AI Hosted Endpoints Access to Resources in Your Amazon VPC](host-vpc.md).

1. Leave the default values for **Container input options** and **Choose model package**.

1. For **Environment variables**, provide the names and values of any environment variables that you want to pass to the model container.

1. For **Tags**, specify one or more tags to manage the model. Each tag consists of a key and an optional value. Tag keys must be unique per resource.

1. Choose **Create model**.

After you create a deployable model, you can use it to set up an endpoint for real-time inference or create a batch transform job to get inferences on entire datasets. For information about hosting endpoints in SageMaker AI, see [Deploy Models for Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html).

## Use a Model Package to Create a Model (API)
<a name="sagemaker-mkt-model-pkg-model-api"></a>

To use a model package to create a deployable model by using the SageMaker API, specify the name or the Amazon Resource Name (ARN) of the model package as the `ModelPackageName` field of the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ContainerDefinition.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ContainerDefinition.html) object that you pass to the [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html) API.
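As a sketch (the model package ARN, model name, and role below are hypothetical placeholders), the request might look like the following; the call itself is commented out because it requires AWS credentials:

```
# Request sketch for CreateModel using a model package.
# The ARN, model name, and role are hypothetical placeholders.
params = {
    "ModelName": "my-model-from-package",
    "PrimaryContainer": {
        # Name or ARN of the model package you created or subscribed to.
        "ModelPackageName": "arn:aws:sagemaker:us-east-2:111122223333:model-package/my-model-package",
    },
    "ExecutionRoleArn": "arn:aws:iam::111122223333:role/SageMakerRole",
    # Containers from subscribed model packages run without internet access.
    "EnableNetworkIsolation": True,
}

# With AWS credentials configured, the call would be:
# import boto3
# boto3.client("sagemaker").create_model(**params)
```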

After you create a deployable model, you can use it to set up an endpoint for real-time inference or create a batch transform job to get inferences on entire datasets. For information about hosted endpoints in SageMaker AI, see [Deploy Models for Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html).

## Use a Model Package to Create a Model ([Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable))
<a name="sagemaker-mkt-model-pkg-model-sdk"></a>

To use a model package to create a deployable model by using the Amazon SageMaker Python SDK, initialize a `ModelPackage` object, and pass the name or Amazon Resource Name (ARN) of the model package as the `model_package_arn` argument. For example:

```
import sagemaker
from sagemaker import ModelPackage

sagemaker_session = sagemaker.Session()

model = ModelPackage(
    role='SageMakerRole',
    model_package_arn='training-job-scikit-decision-trees-1542660466-6f92',
    sagemaker_session=sagemaker_session)
```

After you create a deployable model, you can use it to set up an endpoint for real-time inference or create a batch transform job to get inferences on entire datasets. For information about hosting endpoints in SageMaker AI, see [Deploy Models for Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html).

# Listings for your own algorithms and models with the AWS Marketplace
<a name="sagemaker-marketplace-sell"></a>

Selling Amazon SageMaker AI algorithms and model packages is a three-step process:

1. Develop your algorithm or model, and package it in a Docker container. For information, see [Develop Algorithms and Models in Amazon SageMaker AI](sagemaker-marketplace-develop.md).

1. Create an algorithm or model package resource in SageMaker AI. For information, see [Creation of Algorithm and Model Package Resources](sagemaker-mkt-create.md).

1. Register as a seller on AWS Marketplace and list your algorithm or model package on AWS Marketplace. For information about registering as a seller, see [Getting Started as a Seller](https://docs.aws.amazon.com/marketplace/latest/userguide/user-guide-for-sellers.html) in the *User Guide for AWS Marketplace Providers*. For information about listing and monetizing your algorithms and model packages, see [Listing Algorithms and Model Packages in AWS Marketplace for Machine Learning](https://docs.aws.amazon.com/marketplace/latest/userguide/listing-algorithms-and-model-packages-in-aws-marketplace-for-machine-learning.html) in the *User Guide for AWS Marketplace Providers*.

![\[The seller's workflow in SageMaker AI.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/seller-flow.png)


## Topics
<a name="sagemaker-mkt-sell-topics"></a>
+ [Develop Algorithms and Models in Amazon SageMaker AI](sagemaker-marketplace-develop.md)
+ [Creation of Algorithm and Model Package Resources](sagemaker-mkt-create.md)
+ [List Your Algorithm or Model Package on AWS Marketplace](sagemaker-mkt-list.md)

# Develop Algorithms and Models in Amazon SageMaker AI
<a name="sagemaker-marketplace-develop"></a>

Before you can create algorithm and model package resources to use in Amazon SageMaker AI or list on AWS Marketplace, you have to develop them and package them in Docker containers.

**Note**  
When algorithms and model packages are created for listing on AWS Marketplace, SageMaker AI scans the containers for security vulnerabilities on supported operating systems.   
Only the following operating system versions are supported:  
Debian: 6.0, 7, 8, 9, 10
Ubuntu: 12.04, 12.10, 13.04, 14.04, 14.10, 15.04, 15.10, 16.04, 16.10, 17.04, 17.10, 18.04, 18.10
CentOS: 5, 6, 7
Oracle Linux: 5, 6, 7
Alpine: 3.3, 3.4, 3.5
Amazon Linux

**Topics**
+ [Develop Algorithms in SageMaker AI](#sagmeaker-mkt-develop-algo)
+ [Develop Models in SageMaker AI](#sagemaker-mkt-develop-model)

## Develop Algorithms in SageMaker AI
<a name="sagmeaker-mkt-develop-algo"></a>

To use an algorithm in SageMaker AI, package it as a Docker container and store it in Amazon ECR. The Docker container contains the training code used to run training jobs and, optionally, the inference code used to get inferences from models trained by using the algorithm.

For information about developing algorithms in SageMaker AI and packaging them as containers, see [Docker containers for training and deploying models](docker-containers.md). For a complete example of how to create an algorithm container, see the sample notebook at [https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.html](https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.html). You can also find the sample notebook in a SageMaker notebook instance. The notebook is in the **Advanced Functionality** section, and is named `scikit_bring_your_own.ipynb`.

Always thoroughly test your algorithms before you create algorithm resources to publish on AWS Marketplace.

**Note**  
When a buyer subscribes to your containerized product, the Docker containers run in an isolated (internet-free) environment. When you create your containers, do not rely on making outgoing calls over the internet. Calls to AWS services are also not allowed.

## Develop Models in SageMaker AI
<a name="sagemaker-mkt-develop-model"></a>

A deployable model in SageMaker AI consists of inference code, model artifacts, an IAM role that is used to access resources, and other information required to deploy the model in SageMaker AI. Model artifacts are the results of training a model by using a machine learning algorithm. The inference code must be packaged in a Docker container and stored in Amazon ECR. You can either package the model artifacts in the same container as the inference code, or store them in Amazon S3. 

You create a model by running a training job in SageMaker AI, or by training a machine learning algorithm outside of SageMaker AI. If you run a training job in SageMaker AI, the resulting model artifacts are available in the `ModelArtifacts` field in the response to a call to the [DescribeTrainingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeTrainingJob.html) operation. For information about how to develop a SageMaker AI model container, see [Containers with custom inference code](your-algorithms-inference-main.md). For a complete example of how to create a model container from a model trained outside of SageMaker AI, see the sample notebook at [https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/xgboost_bring_your_own_model/xgboost_bring_your_own_model.html](https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/xgboost_bring_your_own_model/xgboost_bring_your_own_model.html).
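For example, the artifact location can be read from the `ModelArtifacts` field of the response. The helper below operates on a plain response dictionary; the commented-out lines show how you would obtain the response with boto3, and the job name and S3 path are hypothetical placeholders:

```
def model_artifacts_uri(describe_response):
    """Return the S3 URI of the model artifacts from a
    DescribeTrainingJob response dictionary."""
    return describe_response["ModelArtifacts"]["S3ModelArtifacts"]

# With AWS credentials configured, you would obtain the response like this:
# import boto3
# response = boto3.client("sagemaker").describe_training_job(
#     TrainingJobName="my-training-job")
# print(model_artifacts_uri(response))

# Sample response fragment for illustration:
sample = {"ModelArtifacts":
          {"S3ModelArtifacts": "s3://amzn-s3-demo-bucket/output/model.tar.gz"}}
print(model_artifacts_uri(sample))
```

The returned URI is what you pass as the model data location when you package the model for a model package resource.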

Always thoroughly test your models before you create model packages to publish on AWS Marketplace.

**Note**  
When a buyer subscribes to your containerized product, the Docker containers run in an isolated (internet-free) environment. When you create your containers, do not rely on making outgoing calls over the internet. Calls to AWS services are also not allowed.

# List Your Algorithm or Model Package on AWS Marketplace
<a name="sagemaker-mkt-list"></a>

After creating and validating your algorithm or model in Amazon SageMaker AI, list your product on AWS Marketplace. The listing process makes your products available in the AWS Marketplace and the SageMaker AI console. 

To list products on AWS Marketplace, you must be a registered seller. To register, use the self-registration process from the AWS Marketplace Management Portal (AMMP). For information, see [Getting Started as a Seller](https://docs.aws.amazon.com/marketplace/latest/userguide/user-guide-for-sellers.html) in the *User Guide for AWS Marketplace Providers*. When you start the product listing process from the Amazon SageMaker AI console, we check your seller registration status. If you have not registered, we direct you to do so.

To start the listing process, do one of the following:
+ From the SageMaker AI console, choose the product, choose **Actions**, and then choose **Publish new ML Marketplace listing**. This carries over your product reference (its Amazon Resource Name, or ARN) and directs you to the AMMP to create the listing.
+ Go to [ML listing process](https://aws.amazon.com/marketplace/management/ml-products/), manually enter the Amazon Resource Name (ARN), and start your product listing. This process carries over the product metadata that you entered when creating the product in SageMaker AI. For an algorithm listing, the information includes the supported instance types and hyperparameters. In addition, you can enter a product description, promotional information, and support information as you would with other AWS Marketplace products.

# Find and Subscribe to Algorithms and Model Packages on AWS Marketplace
<a name="sagemaker-mkt-find-subscribe"></a>

With AWS Marketplace, you can browse and search for hundreds of machine learning algorithms and models in a broad range of categories, such as computer vision, natural language processing, speech recognition, text, image, and video analysis, fraud detection, and predictive analytics.

![\[The buyer workflow.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/buyer-flow.png)


**To find algorithms on AWS Marketplace**

1. Open the Amazon SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Choose **Algorithms**, then choose **Find algorithms**.

   This takes you to the AWS Marketplace algorithms page. For information about finding and subscribing to algorithms on AWS Marketplace, see [Machine Learning Products](https://docs.aws.amazon.com/marketplace/latest/buyerguide/aws-machine-learning-marketplace.html) in the *AWS Marketplace User Guide for AWS Consumers*.

**To find model packages on AWS Marketplace**

1. Open the SageMaker AI console at [https://console.aws.amazon.com/sagemaker/](https://console.aws.amazon.com/sagemaker/).

1. Choose **Model packages**, then choose **Find model packages**.

   This takes you to the AWS Marketplace model packages page. For information about finding and subscribing to model packages on AWS Marketplace, see [Machine Learning Products](https://docs.aws.amazon.com/marketplace/latest/buyerguide/aws-machine-learning-marketplace.html) in the *AWS Marketplace User Guide for AWS Consumers*.

## Use Algorithms and Model Packages
<a name="sagemaker-mkt-how-to-use"></a>

For information about using algorithms and model packages that you subscribe to in SageMaker AI, see [Usage of Algorithm and Model Package Resources](sagemaker-mkt-buy.md).

**Note**  
When you create a training job, inference endpoint, or batch transform job from an algorithm or model package that you subscribe to on AWS Marketplace, the training and inference containers do not have access to the internet. Because the containers cannot access the internet, the seller of the algorithm or model package does not have access to your data.