

# Getting started with Amazon Rekognition Custom Labels

Before starting these *Getting started* instructions, we recommend that you read [Understanding Amazon Rekognition Custom Labels](understanding-custom-labels.md).

You use Amazon Rekognition Custom Labels to train a machine learning model. The trained model analyzes images to find the objects, scenes, and concepts that are unique to your business needs. For example, you can train a model to classify images of houses, or find the location of electronic parts on a printed circuit board.

To help you get started, Amazon Rekognition Custom Labels includes tutorial videos and example projects.

**Note**  
For information about the AWS Regions and endpoints that Amazon Rekognition Custom Labels supports, see [Rekognition endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/rekognition.html).

## Tutorial videos


The videos show you how to use Amazon Rekognition Custom Labels to train and use a model.

**To view the tutorial videos**

1. Sign in to the AWS Management Console and open the Amazon Rekognition console at [https://console.aws.amazon.com/rekognition/](https://console.aws.amazon.com/rekognition/).

1. In the left pane, choose **Use Custom Labels**. The Amazon Rekognition Custom Labels landing page is shown. If you don't see **Use Custom Labels**, check that the [AWS Region](https://docs.aws.amazon.com/general/latest/gr/rekognition_region.html) you are using supports Amazon Rekognition Custom Labels. 

1. In the navigation pane, choose **Get started**. 

1. In **What is Amazon Rekognition Custom Labels?**, choose the video to watch the overview. 

1. In the navigation pane, choose **Tutorials**.

1. On the **Tutorials** page, choose the tutorial videos that you want to watch.

## Example projects


Amazon Rekognition Custom Labels provides the following example projects. 

### Image classification


The image classification project (Rooms) trains a model that finds one or more household locations in an image, such as *backyard*, *kitchen*, and *patio*. The training and test images represent a single location. Each image is labeled with a single image-level label, such as *kitchen*, *patio*, or *living_space*. For an analyzed image, the trained model returns one or more matching labels from the set of image-level labels used for training. For example, the model might find the label *living_space* in the following image. For more information, see [Find objects, scenes, and concepts](md-dataset-purpose.md#md-dataset-purpose-classification). 

![\[Living room with fireplace, plush sofa, armchair, round tables, plants, and large windows overlooking outdoors.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/image-classification.jpg)


### Multi-label image classification


The multi-label image classification project (Flowers) trains a model that categorizes images of flowers into three concepts (flower type, leaf presence, and growth stage). 

The training and test images have image-level labels for each concept, such as *camellia* for a flower type, *with_leaves* for a flower with leaves, and *fully_grown* for a flower that is fully grown.

For an analyzed image, the trained model returns matching labels from the set of image-level labels used for training. For example, the model returns the labels *mediterranean_spurge* and *with_leaves* for the following image. For more information, see [Find objects, scenes, and concepts](md-dataset-purpose.md#md-dataset-purpose-classification). 

![\[Close-up of a vibrant green flower with tightly packed petals forming a spherical shape.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/multi-label-classification.jpg)


### Brand detection


The brand detection project (Logos) trains a model that finds the location of certain AWS logos, such as *Amazon Textract* and *AWS Lambda*. The training images are of the logo only and have a single image-level label, such as *lambda* or *textract*. You can also train a brand detection model with training images that have bounding boxes for brand locations. The test images have labeled bounding boxes that represent the locations of logos in natural settings, such as an architectural diagram. The trained model finds the logos and returns a labeled bounding box for each logo found. For more information, see [Find brand locations](md-dataset-purpose.md#md-dataset-purpose-brands). 

![\[Lambda service feeding user activity into Amazon Pinpoint for recommendations.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/brand-detection-lambda.png)


### Object localization


The object localization project (Circuit boards) trains a model that finds the location of parts on a printed circuit board, such as a *comparator* or an *infrared light-emitting diode*. The training and test images include bounding boxes that surround the circuit board parts and a label that identifies the part within the bounding box. In the following example image, the label names are *ir_phototransistor*, *ir_led*, *pot_resistor*, and *comparator*. The trained model finds the circuit board parts and returns a labeled bounding box for each circuit part found. For more information, see [Find object locations](md-dataset-purpose.md#md-dataset-purpose-localization). 

![\[Component image showing an IR LED, pot resistor, and comparator chip on a circuit board.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/localization-circuit-board.png)


## Using the example projects


These *Getting started* instructions show you how to train a model by using the example projects that Amazon Rekognition Custom Labels creates for you. They also show you how to start the model and use it to analyze an image. 

### Creating the example project


To get started, decide which project to use. For more information, see [Step 1: Choose an example project](gs-step-choose-example-project.md).

Amazon Rekognition Custom Labels uses datasets to train and evaluate (test) a model. A dataset manages images and the labels that identify the contents of images. The example projects include a training dataset and a test dataset in which all images are labeled. You don't need to make any changes before training your model. The example projects show the two ways in which Amazon Rekognition Custom Labels uses labels to train different types of models.
+ *image-level* – The label identifies an object, scene, or concept that represents the entire image. 
+ *bounding box* – The label identifies the contents of a bounding box. A bounding box is a set of image coordinates that surround an object in an image. 
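Bounding-box coordinates in Rekognition responses are expressed as ratios (values between 0 and 1) of the overall image width and height, not as pixels. The following sketch shows one way you might convert a normalized box to pixel coordinates; the sample box values are illustrative, not from a real response.

```python
# Convert a normalized Rekognition bounding box (ratios of the image
# dimensions) to pixel coordinates. Sample values are illustrative only.

def box_to_pixels(box, image_width, image_height):
    """box has Left/Top/Width/Height as 0-1 ratios of the image size."""
    return {
        "left": round(box["Left"] * image_width),
        "top": round(box["Top"] * image_height),
        "width": round(box["Width"] * image_width),
        "height": round(box["Height"] * image_height),
    }

# Hypothetical box on a 640 x 480 image.
normalized = {"Left": 0.25, "Top": 0.5, "Width": 0.1, "Height": 0.2}
pixels = box_to_pixels(normalized, 640, 480)
print(pixels)  # {'left': 160, 'top': 240, 'width': 64, 'height': 96}
```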

Later, when you create a project with your own images, you must create training and test datasets, and also label your images. For more information, see [Decide your model type](understanding-custom-labels.md#tm-intro-model-type). 

### Training the model


After Amazon Rekognition Custom Labels creates the example project, you can train the model. For more information, see [Step 2: Train your model](gs-step-train-model.md). After training finishes, you would normally evaluate the performance of the model. The example datasets already produce a high-performance model, so you don't need to evaluate the model before running it. For more information, see [Improving a trained Amazon Rekognition Custom Labels model](improving-model.md). 

### Using the model


Next you start the model. For more information, see [Step 3: Start your model](gs-step-start-model.md). 

After you start running your model, you can use it to analyze new images. For more information, see [Step 4: Analyze an image with your model](gs-step-get-a-prediction.md).

You are charged for the amount of time that your model runs. When you finish using the example model, you should stop the model. For more information, see [Step 5: Stop your model](gs-step-stop-model.md).

### Next steps


When you're ready, you can create your own projects. For more information, see [Step 6: Next steps](gs-step-next.md). 

# Step 1: Choose an example project


In this step, you choose an example project. Amazon Rekognition Custom Labels then creates a project and a dataset for you. A project manages the files used to train your model. For more information, see [Managing an Amazon Rekognition Custom Labels project](managing-project.md). Datasets contain the images, assigned labels, and bounding boxes that you use to train and test a model. For more information, see [Managing datasets](managing-dataset.md). 

For information about the example projects, see [Example projects](getting-started.md#gs-example-projects).

**Choose an example project**

1. Sign in to the AWS Management Console and open the Amazon Rekognition console at [https://console.aws.amazon.com/rekognition/](https://console.aws.amazon.com/rekognition/).

1. In the left pane, choose **Use Custom Labels**. The Amazon Rekognition Custom Labels landing page is shown. If you don't see **Use Custom Labels**, check that the [AWS Region](https://docs.aws.amazon.com/general/latest/gr/rekognition_region.html) you are using supports Amazon Rekognition Custom Labels.

1. Choose **Get started**. 

![\[Amazon Rekognition Custom Labels section showing Get started, Tutorials with "Example projects" highlighted, Projects, and Datasets.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/example-projects.png)

1. In **Explore example projects**, choose **Try example projects**.

1. Decide which project you want to use, and choose **Create project "*project name*"** in the example's section. Amazon Rekognition Custom Labels then creates the example project for you.
**Note**  
If this is the first time that you've opened the console in the current AWS Region, the **First Time Set Up** dialog box is shown. Do the following:  
Note the name of the Amazon S3 bucket that's shown.
Choose **Continue** to let Amazon Rekognition Custom Labels create an Amazon S3 bucket (console bucket) on your behalf. The following console image shows the example projects with **Create project** buttons for Image classification (Rooms), Multi-label classification (Flowers), Brand detection (Logos), and Object localization (Circuit boards).  
![\[Amazon Rekognition service examples with "Create project" buttons for Image Classification (Rooms), Multi-label classification (Flowers), Brand detection (Logos), and Object Localization (Circuit boards).\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started.jpg)

1. After your project is ready, choose **Go to dataset**. The following image shows what the project panel looks like when the project is ready.  
![\[Project rooms status panel with "Go to dataset" button for accessing data after model training is complete.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-goto-dataset-dialog.jpg)

# Step 2: Train your model


In this step you train your model. The training and test datasets are automatically configured for you. After training successfully completes, you can see the overall evaluation results, and evaluation results for individual test images. For more information, see [Training an Amazon Rekognition Custom Labels model](training-model.md). 

**To train your model**

1. On the dataset page, choose **Train model**. The following image shows the console with the **Train model** button.  
![\[Console interface for rooms dataset with the Train model button to begin training a model.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-train-model.jpg)

1. On the **Train model** page, choose **Train model**. The following image shows the **Train model** button. Notice that the Amazon Resource Name (ARN) for your project is in the **Choose project** edit box.   
![\[Train model page with project ARN input field and Train model button.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/tutorial-train-model-page-train-model.jpg)

1. In the **Do you want to train your model?** dialog box, shown in the following image, choose **Train model**.   
![\[Dialog box to start model training with Cancel and Train model buttons.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/tutorial-dialog-train-model.jpg)

1. After training completes, choose the model name. Training is finished when the model status is **TRAINING_COMPLETED**, as shown in the following console screenshot.  
![\[Model training interface showing completed status for model named "rooms_19.2021-07-13T10:36:30" with performance score 0.902 and status "TRAINING_COMPLETED".\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-choose-model.jpg)

1. Choose the **Evaluate** button to see the evaluation results. For information about evaluating a model, see [Improving a trained Amazon Rekognition Custom Labels model](improving-model.md).

1. Choose **View test results** to see the results for individual test images. As shown in the following screenshot, the evaluation dashboard shows metrics such as F1 score, precision, and recall for each label, along with the number of test images. Overall metrics, such as average precision and recall, are also displayed.  
![\[Model evaluation results showing performance metrics across 10 labels.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-training-results.jpg)

1. After viewing the test results, choose the model name to return to the model page. The following screenshot shows example test results with a breadcrumb link that you can use to return to the model page.  
![\[Two example images from test results with predicted labels and confidence scores, and a breadcrumb link to return to the model page.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-image-test-results.jpg)

# Step 3: Start your model


In this step you start your model. After your model starts, you can use it to analyze images.

You are charged for the amount of time that your model runs. Stop your model if you don't need to analyze images. You can restart your model at a later time. For more information, see [Running a trained Amazon Rekognition Custom Labels model](running-model.md). 

**To start your model**

1. Choose the **Use model** tab on the model page.

1. In the **Start or stop model** section, do the following:

   1. Choose **Start**.

   1. In the **Start model** dialog box, choose **Start**. The following image shows the Start button in the model control panel.  
![\[Start model control panel with the Start button and an option to select one inference unit.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-start-model.jpg)

1. Wait until the model is running. The following screenshot shows the console while the model is running, where the status in the **Start or stop model** section is **Running**.  
![\[Model status showing as Running, with Stop button to stop the running model.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-start-model-running.jpg)

1. Use your model to classify images. For more information, see [Step 4: Analyze an image with your model](gs-step-get-a-prediction.md).

# Step 4: Analyze an image with your model


You analyze an image by calling the [DetectCustomLabels](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectCustomLabels) API. In this step, you use the `detect-custom-labels` AWS Command Line Interface (AWS CLI) command to analyze an example image. You get the AWS CLI command from the Amazon Rekognition Custom Labels console. The console configures the AWS CLI command to use your model. You only need to supply an image that's stored in an Amazon S3 bucket. This topic provides an image that you can use for each example project. 

**Note**  
The console also provides Python example code.

The output from `detect-custom-labels` includes a list of labels found in the image, bounding boxes (if the model finds object locations), and the confidence that the model has in the accuracy of its predictions.
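If you parse the command's JSON output in a script, you might keep only high-confidence predictions. The following sketch assumes a hypothetical response dict, not real model output. (The `detect-custom-labels` command also accepts a `--min-confidence` parameter that applies this filtering for you.)

```python
# Filter DetectCustomLabels results by a minimum confidence score.
# The response below is a hypothetical example, not real model output.

def confident_labels(response, min_confidence=80.0):
    """Return the names of labels at or above min_confidence (0-100)."""
    return [
        label["Name"]
        for label in response.get("CustomLabels", [])
        if label["Confidence"] >= min_confidence
    ]

response = {
    "CustomLabels": [
        {"Name": "living_space", "Confidence": 83.4},
        {"Name": "kitchen", "Confidence": 41.2},
    ]
}
print(confident_labels(response))  # ['living_space']
```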

For more information, see [Analyzing an image with a trained model](detecting-custom-labels.md).

**To analyze an image (console)**

1. If you haven't already, set up the AWS CLI. For instructions, see [Step 4: Set up the AWS CLI and AWS SDKs](su-awscli-sdk.md).

1. If you haven't already, start running your model. For more information, see [Step 3: Start your model](gs-step-start-model.md).

1. Choose the **Use model** tab, and then choose **API code**. The following model status panel shows the model as running, with a **Stop** button to stop the running model and an option to display the API code.  
![\[Model status showing as Running, with Stop button to stop the running model.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-use-model-api-code.png)

1. Choose **AWS CLI command**.

1. In the **Analyze image** section, copy the AWS CLI command that calls `detect-custom-labels`. The following console image shows the **Analyze image** section with the AWS CLI command to detect custom labels in an image, along with instructions to start the model and provide image details.  
![\[Console screenshot with the AWS CLI command to detect custom labels on an image using a machine learning model, and instructions to start the model and provide image details.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-cli-code-analyze.png)

1. Upload an example image to an Amazon S3 bucket. For instructions, see [Getting an example image](#gs-example-images).

1. At the command prompt, enter the AWS CLI command that you copied in the previous step. It should look like the following example. 

   The value of `--project-version-arn` should be the Amazon Resource Name (ARN) of your model. The value of `--region` should be the AWS Region in which you created the model.

   Change `MY_BUCKET` and `PATH_TO_MY_IMAGE` to the Amazon S3 bucket and image that you used in the previous step. 

   If you are using the [custom-labels-access](su-sdk-programmatic-access.md#su-sdk-programmatic-access-customlabels-examples) profile to get credentials, add the `--profile custom-labels-access` parameter.

   ```
   aws rekognition detect-custom-labels \
     --project-version-arn "model_arn" \
     --image '{"S3Object": {"Bucket": "MY_BUCKET","Name": "PATH_TO_MY_IMAGE"}}' \
     --region us-east-1 \
     --profile custom-labels-access
   ```

   If the model finds objects, scenes, and concepts, the JSON output from the AWS CLI command should look similar to the following. `Name` is the name of the image-level label that the model found. `Confidence` (0-100) is the model's confidence in the accuracy of the prediction.

   ```
   {
       "CustomLabels": [
           {
               "Name": "living_space",
               "Confidence": 83.41299819946289
           }
       ]
   }
   ```

   If the model finds object locations or brands, labeled bounding boxes are returned. `BoundingBox` contains the location of a box that surrounds the object. `Name` is the object that the model found in the bounding box. `Confidence` is the model's confidence that the bounding box contains the object. 

   ```
   {
       "CustomLabels": [
           {
               "Name": "textract",
               "Confidence": 87.7729721069336,
               "Geometry": {
                   "BoundingBox": {
                       "Width": 0.198987677693367,
                       "Height": 0.31296101212501526,
                       "Left": 0.07924537360668182,
                       "Top": 0.4037395715713501
                   }
               }
           }
       ]
   }
   ```

1. Continue to use the model to analyze other images. Stop the model if you are no longer using it. For more information, see [Step 5: Stop your model](gs-step-stop-model.md).
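Depending on the model type, each entry in `CustomLabels` either includes a `Geometry` field (location and brand models) or omits it (classification models). The following sketch separates the two kinds of entries; the sample response is hypothetical, not real model output.

```python
# Split DetectCustomLabels results into image-level labels and
# localized (bounding-box) labels. The sample response is hypothetical.

def split_labels(response):
    """Return (image_level, localized) lists of label dicts."""
    image_level, localized = [], []
    for label in response.get("CustomLabels", []):
        if "Geometry" in label:
            localized.append(label)
        else:
            image_level.append(label)
    return image_level, localized

response = {
    "CustomLabels": [
        {"Name": "circuit_board", "Confidence": 91.0},
        {
            "Name": "comparator",
            "Confidence": 87.7,
            "Geometry": {
                "BoundingBox": {
                    "Width": 0.2, "Height": 0.3, "Left": 0.1, "Top": 0.4
                }
            },
        },
    ]
}
scene, parts = split_labels(response)
print([label["Name"] for label in scene])  # ['circuit_board']
print([label["Name"] for label in parts])  # ['comparator']
```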

## Getting an example image


You can use the following images with the `DetectCustomLabels` operation. There is one image for each project. To use the images, you upload them to an S3 bucket. 

**To use an example image**

1. Right-click the following image that matches the example project that you are using. Then choose **Save image** to save the image to your computer. The menu option might be different, depending on which browser you are using.

1. Upload the image to an Amazon S3 bucket that's owned by your AWS account and is in the same AWS Region in which you are using Amazon Rekognition Custom Labels.

   For instructions, see [Uploading Objects into Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UploadingObjectsintoAmazonS3.html) in the *Amazon Simple Storage Service User Guide*.

### Image classification


![\[Living room with fireplace, couch, armchair, end tables, lamps, and large windows.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/image-classification.jpg)


### Multi-label classification


![\[Spherical green flower head composed of densely packed overlapping petals or bracts forming a ball-like shape.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/multi-label-classification.jpg)


### Brand detection


![\[Diagram showing user activity data flowing from Lambda to Amazon Personalize for recommendations, and to Amazon Pinpoint for recommendations.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/brand-detection.png)


### Object localization


![\[Small circuit with various electronic components, and connector pins.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/object-localization.jpg)


# Step 5: Stop your model


In this step you stop running your model. You are charged for the amount of time your model is running. If you have finished using the model, you should stop it.

**To stop your model**

1. In the **Start or stop model** section choose **Stop**.  
![\[Console screenshot including Stop button to stop the running custom label detection model.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-stop-model.jpg)

1. In the **Stop model** dialog box, enter **stop** to confirm that you want to stop the model.  
![\[Stop model dialog with text field to enter "stop" and confirm stopping the model.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-stop-model-dialog.jpg)

1. Choose **Stop** to stop your model. The model has stopped when the status in the **Start or stop model** section is **Stopped**. The following screenshot shows the **Start or stop model** section with the model status as **Stopped**, a **Start** button to start the model, and a dropdown to select the number of inference units.  
![\[User interface section to start or stop a machine learning model, showing the model's status as "Stopped" with a "Start" button to start the model and a dropdown to select the number of inference units.\]](http://docs.aws.amazon.com/rekognition/latest/customlabels-dg/images/get-started-stopped-model.jpg)

# Step 6: Next steps


After you finish trying the example projects, you can use your own images and datasets to create your own model. For more information, see [Understanding Amazon Rekognition Custom Labels](understanding-custom-labels.md).

Use the labeling information in the following table to train models similar to the example projects.


| Example | Training images | Test images | 
| --- | --- | --- | 
|  Image classification (Rooms)  |  1 Image-level label per image  |  1 Image-level label per image   | 
|  Multi-label classification (Flowers)  |  Multiple image-level labels per image  |  Multiple image-level labels per image  | 
|  Brand detection (Logos)  |  Image-level labels (you can also use labeled bounding boxes)  |  Labeled bounding boxes  | 
|  Object localization (Circuit boards)  |  Labeled bounding boxes  |  Labeled bounding boxes  | 

The [Classifying images](tutorial-classification.md) tutorial shows you how to create a project, datasets, and model for an image classification model.

For detailed information about creating datasets and training models, see [Creating an Amazon Rekognition Custom Labels model](creating-model.md).