

# Video frames
<a name="sms-video-task-types"></a>

You can use Ground Truth built-in video frame task types to have workers annotate video frames using bounding boxes, polylines, polygons, or keypoints. *Video frames* are images that have been extracted from a video.

If you do not have video frames, you can provide video files (MP4 files) and use the Ground Truth automated frame extraction tool to extract video frames. To learn more, see [Provide Video Files](sms-point-cloud-video-input-data.md#sms-point-cloud-video-frame-extraction).

You can use the following built-in video task types to create video frame labeling jobs using the Amazon SageMaker AI console, API, and language-specific SDKs.
+ **Video frame object detection** – Use this task type when you want workers to identify and locate objects in sequences of video frames. You provide a list of categories, and workers can select one category at a time and annotate objects which the category applies to in all frames. For example, you can use this task to ask workers to identify and localize various objects in a scene, such as cars, bikes, and pedestrians.
+ **Video frame object tracking** – Use this task type when you want workers to track the movement of instances of objects across sequences of video frames. When a worker adds an annotation to a single frame, that annotation is associated with a unique instance ID. The worker adds annotations associated with the same ID in all other frames to identify the same object or person. For example, a worker can track the movement of a vehicle across a sequence of video frames by drawing bounding boxes associated with the same ID around the vehicle in each frame that it appears. 

Use the following topics to learn more about these built-in task types and how to create a labeling job using each task type. See [Task types](sms-video-overview.md#sms-video-frame-tools) to learn more about the annotation tools (bounding boxes, polylines, polygons, and keypoints) available for these task types.

Before you create a labeling job, we recommend that you review [Video frame labeling job reference](sms-video-overview.md).

**Topics**
+ [Identify objects using video frame object detection](sms-video-object-detection.md)
+ [Track objects in video frames using video frame object tracking](sms-video-object-tracking.md)
+ [Video frame labeling job reference](sms-video-overview.md)

# Identify objects using video frame object detection
<a name="sms-video-object-detection"></a>

You can use the video frame object detection task type to have workers identify and locate objects in a sequence of video frames (images extracted from a video) using bounding box, polyline, polygon, or keypoint *annotation tools*. The tool you choose defines the video frame task type you create. For example, you can use a bounding box video frame object detection task type to have workers identify and localize various objects in a series of video frames, such as cars, bikes, and pedestrians. You can create a video frame object detection labeling job using the Amazon SageMaker AI Ground Truth console, the SageMaker API, and language-specific AWS SDKs. To learn more, see [Create a Video Frame Object Detection Labeling Job](#sms-video-od-create-labeling-job) and select your preferred method. See [Task types](sms-video-overview.md#sms-video-frame-tools) to learn more about the annotation tools you can choose from when you create a labeling job.

Ground Truth provides a worker UI and tools to complete your labeling job tasks: [Preview the Worker UI](#sms-video-od-worker-ui).

You can create a job to adjust annotations created in a video object detection labeling job using the video object detection adjustment task type. To learn more, see [Create Video Frame Object Detection Adjustment or Verification Labeling Job](#sms-video-od-adjustment).

## Preview the Worker UI
<a name="sms-video-od-worker-ui"></a>

Ground Truth provides workers with a web user interface (UI) to complete your video frame object detection annotation tasks. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, we recommend that you create a labeling job through the console using a small input dataset to preview the worker UI and ensure your video frames, labels, and label attributes appear as expected. 

The UI provides workers with the following assistive labeling tools to complete your object detection tasks:
+ For all tasks, workers can use the **Copy to next** and **Copy to all** features to copy an annotation to the next frame or to all subsequent frames respectively. 
+ For tasks that include the bounding box tools, workers can use a **Predict next** feature to draw a bounding box in a single frame, and then have Ground Truth predict the location of boxes with the same label in all other frames. Workers can then make adjustments to correct predicted box locations. 

The following video shows how a worker might use the worker UI with the bounding box tool to complete your object detection tasks.

![\[Gif showing how a worker can use the bounding box tool for their object detection tasks.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/video/kitti-od-general-labeling-job.gif)


## Create a Video Frame Object Detection Labeling Job
<a name="sms-video-od-create-labeling-job"></a>

You can create a video frame object detection labeling job using the SageMaker AI console or the [`CreateLabelingJob`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html) API operation. 

This section assumes that you have reviewed the [Video frame labeling job reference](sms-video-overview.md) and have chosen the type of input data and the input dataset connection you are using. 

### Create a Labeling Job (Console)
<a name="sms-video-od-create-labeling-job-console"></a>

You can follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a video frame object detection job in the SageMaker AI console. In step 10, choose **Video - Object detection** from the **Task category** dropdown list. Select the task type you want by selecting one of the cards in **Task selection**.

![\[Gif showing how to create a video frame object detection job in the SageMaker AI console.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/video/task-type-vod.gif)


### Create a Labeling Job (API)
<a name="sms-video-od-create-labeling-job-api"></a>

You create an object detection labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is available in all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [`CreateLabelingJob`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/VideoObjectDetection`. Replace `<region>` with the AWS Region in which you are creating the labeling job. 

  Do not include an entry for the `UiTemplateS3Uri` parameter. 
+ Your [`LabelAttributeName`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `video-od-labels-ref`. 
+ Your input manifest file must be a video frame sequence manifest file. You can create this manifest file using the SageMaker AI console, or create it manually and upload it to Amazon S3. For more information, see [Input Data Setup](sms-video-data-setup.md). 
+ You can only use private or vendor work teams to create video frame object detection labeling jobs. 
+ You specify your labels, label category and frame attributes, the task type, and worker instructions in a label category configuration file. Specify the task type (bounding boxes, polylines, polygons, or keypoints) using `annotationType` in your label category configuration file. See [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md) to learn how to create this file. 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [`PreHumanTaskLambdaArn`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region in which you are creating your labeling job to find the correct ARN that ends with `PRE-VideoObjectDetection`. 
  + To find the post-annotation Lambda ARN, refer to [`AnnotationConsolidationLambdaArn`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region in which you are creating your labeling job to find the correct ARN that ends with `ACS-VideoObjectDetection`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` must be `1`. 
+ Automated data labeling is not supported for video frame labeling jobs. Do not specify values for parameters in [`LabelingJobAlgorithmsConfig`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig). 
+ Video frame object detection labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 

The following is an example of an [AWS Python SDK (Boto3) request](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_labeling_job) to create a labeling job in the US East (N. Virginia) Region. 

```
response = client.create_labeling_job(
    LabelingJobName='example-video-od-labeling-job',
    LabelAttributeName='video-od-labels-ref',
    InputConfig={
        'DataSource': {
            'S3DataSource': {
                'ManifestS3Uri': 's3://amzn-s3-demo-bucket/path/video-frame-sequence-input-manifest.json'
            }
        },
        'DataAttributes': {
            'ContentClassifiers': [
                'FreeOfPersonallyIdentifiableInformation',
                'FreeOfAdultContent',
            ]
        }
    },
    OutputConfig={
        'S3OutputPath': 's3://amzn-s3-demo-bucket/prefix/file-to-store-output-data',
        'KmsKeyId': 'string'
    },
    RoleArn='arn:aws:iam::*:role/*',
    LabelCategoryConfigS3Uri='s3://bucket/prefix/label-categories.json',
    StoppingConditions={
        'MaxHumanLabeledObjectCount': 123,
        'MaxPercentageOfInputDatasetLabeled': 123
    },
    HumanTaskConfig={
        'WorkteamArn': 'arn:aws:sagemaker:us-east-1:*:workteam/private-crowd/*',
        'UiConfig': {
            'HumanTaskUiArn': 'arn:aws:sagemaker:us-east-1:394669845002:human-task-ui/VideoObjectDetection'
        },
        'PreHumanTaskLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectDetection',
        'TaskKeywords': [
            'Video Frame Object Detection',
        ],
        'TaskTitle': 'Video frame object detection task',
        'TaskDescription': 'Classify and identify the location of objects and people in video frames',
        'NumberOfHumanWorkersPerDataObject': 1,
        'TaskTimeLimitInSeconds': 123,
        'TaskAvailabilityLifetimeInSeconds': 123,
        'MaxConcurrentTaskCount': 123,
        'AnnotationConsolidationConfig': {
            'AnnotationConsolidationLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:ACS-VideoObjectDetection'
        }
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
```

## Create Video Frame Object Detection Adjustment or Verification Labeling Job
<a name="sms-video-od-adjustment"></a>

You can create an adjustment and verification labeling job using the Ground Truth console or the `CreateLabelingJob` API. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

## Output Data Format
<a name="sms-video-od-output-data"></a>

When you create a video frame object detection labeling job, tasks are sent to workers. When these workers complete their tasks, labels are written to the Amazon S3 output location you specified when you created the labeling job. To learn about the video frame object detection output data format, see [Video frame object detection output](sms-data-output.md#sms-output-video-object-detection). If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. 

# Track objects in video frames using video frame object tracking
<a name="sms-video-object-tracking"></a>

You can use the video frame object tracking task type to have workers track the movement of objects in a sequence of video frames (images extracted from a video) using bounding box, polyline, polygon, or keypoint *annotation tools*. The tool you choose defines the video frame task type you create. For example, you can use a bounding box video frame object tracking task type to ask workers to track the movement of objects, such as cars, bikes, and pedestrians, by drawing boxes around them. 

You provide a list of categories, and each annotation that a worker adds to a video frame is identified as an *instance* of that category using an instance ID. For example, if you provide the label category `car`, the first car that a worker annotates has the instance ID `car:1`. The second car the worker annotates has the instance ID `car:2`. To track an object's movement, the worker adds annotations associated with the same instance ID around the object in all frames. 

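The instance ID scheme described above can be sketched as a per-category counter. This is an illustration of the numbering convention only; Ground Truth assigns instance IDs in the worker UI.

```python
from collections import defaultdict

def make_id_generator():
    # Count how many instances of each category have been annotated so far.
    counts = defaultdict(int)

    def next_instance_id(category):
        counts[category] += 1
        return f"{category}:{counts[category]}"

    return next_instance_id

next_id = make_id_generator()
print(next_id("car"))         # car:1
print(next_id("car"))         # car:2
print(next_id("pedestrian"))  # pedestrian:1
```
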
You can create a video frame object tracking labeling job using the Amazon SageMaker AI Ground Truth console, the SageMaker API, and language-specific AWS SDKs. To learn more, see [Create a Video Frame Object Tracking Labeling Job](#sms-video-ot-create-labeling-job) and select your preferred method. See [Task types](sms-video-overview.md#sms-video-frame-tools) to learn more about the annotation tools you can choose from when you create a labeling job.

Ground Truth provides a worker UI and tools to complete your labeling job tasks: [Preview the Worker UI](#sms-video-ot-worker-ui).

You can create a job to adjust annotations created in a video object tracking labeling job using the video object tracking adjustment task type. To learn more, see [Create a Video Frame Object Tracking Adjustment or Verification Labeling Job](#sms-video-ot-adjustment).

## Preview the Worker UI
<a name="sms-video-ot-worker-ui"></a>

Ground Truth provides workers with a web user interface (UI) to complete your video frame object tracking annotation tasks. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, we recommend that you create a labeling job through the console using a small input dataset to preview the worker UI and ensure your video frames, labels, and label attributes appear as expected. 

The UI provides workers with the following assistive labeling tools to complete your object tracking tasks:
+ For all tasks, workers can use the **Copy to next** and **Copy to all** features to copy an annotation with the same unique ID to the next frame or to all subsequent frames respectively. 
+ For tasks that include the bounding box tools, workers can use a **Predict next** feature to draw a bounding box in a single frame, and then have Ground Truth predict the location of boxes with the same unique ID in all other frames. Workers can then make adjustments to correct predicted box locations. 

The following video shows how a worker might use the worker UI with the bounding box tool to complete your object tracking tasks.

![\[Gif showing how a worker can use the bounding box tool with the predict next feature.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/video/ot_predict_next.gif)


## Create a Video Frame Object Tracking Labeling Job
<a name="sms-video-ot-create-labeling-job"></a>

You can create a video frame object tracking labeling job using the SageMaker AI console or the [`CreateLabelingJob`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html) API operation. 

This section assumes that you have reviewed the [Video frame labeling job reference](sms-video-overview.md) and have chosen the type of input data and the input dataset connection you are using. 

### Create a Labeling Job (Console)
<a name="sms-video-ot-create-labeling-job-console"></a>

You can follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a video frame object tracking job in the SageMaker AI console. In step 10, choose **Video - Object tracking** from the **Task category** dropdown list. Select the task type you want by selecting one of the cards in **Task selection**.

![\[Gif showing how to create a video frame object tracking job in the SageMaker AI console.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/video/task-type-vot.gif)


### Create a Labeling Job (API)
<a name="sms-video-ot-create-labeling-job-api"></a>

You create an object tracking labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is available in all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [`CreateLabelingJob`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/VideoObjectTracking`. Replace `<region>` with the AWS Region in which you are creating the labeling job. 

  Do not include an entry for the `UiTemplateS3Uri` parameter. 
+ Your [`LabelAttributeName`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ot-labels-ref`. 
+ Your input manifest file must be a video frame sequence manifest file. You can create this manifest file using the SageMaker AI console, or create it manually and upload it to Amazon S3. For more information, see [Input Data Setup](sms-video-data-setup.md). If you create a streaming labeling job, the input manifest file is optional. 
+ You can only use private or vendor work teams to create video frame object tracking labeling jobs.
+ You specify your labels, label category and frame attributes, the task type, and worker instructions in a label category configuration file. Specify the task type (bounding boxes, polylines, polygons, or keypoints) using `annotationType` in your label category configuration file. See [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md) to learn how to create this file. 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [`PreHumanTaskLambdaArn`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region in which you are creating your labeling job to find the correct ARN that ends with `PRE-VideoObjectTracking`. 
  + To find the post-annotation Lambda ARN, refer to [`AnnotationConsolidationLambdaArn`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region in which you are creating your labeling job to find the correct ARN that ends with `ACS-VideoObjectTracking`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` must be `1`. 
+ Automated data labeling is not supported for video frame labeling jobs. Do not specify values for parameters in [`LabelingJobAlgorithmsConfig`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig). 
+ Video frame object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 

The following is an example of an [AWS Python SDK (Boto3) request](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_labeling_job) to create a labeling job in the US East (N. Virginia) Region. 

```
response = client.create_labeling_job(
    LabelingJobName='example-video-ot-labeling-job',
    LabelAttributeName='ot-labels-ref',
    InputConfig={
        'DataSource': {
            'S3DataSource': {
                'ManifestS3Uri': 's3://amzn-s3-demo-bucket/path/video-frame-sequence-input-manifest.json'
            }
        },
        'DataAttributes': {
            'ContentClassifiers': [
                'FreeOfPersonallyIdentifiableInformation',
                'FreeOfAdultContent',
            ]
        }
    },
    OutputConfig={
        'S3OutputPath': 's3://amzn-s3-demo-bucket/prefix/file-to-store-output-data',
        'KmsKeyId': 'string'
    },
    RoleArn='arn:aws:iam::*:role/*',
    LabelCategoryConfigS3Uri='s3://bucket/prefix/label-categories.json',
    StoppingConditions={
        'MaxHumanLabeledObjectCount': 123,
        'MaxPercentageOfInputDatasetLabeled': 123
    },
    HumanTaskConfig={
        'WorkteamArn': 'arn:aws:sagemaker:us-east-1:*:workteam/private-crowd/*',
        'UiConfig': {
            'HumanTaskUiArn': 'arn:aws:sagemaker:us-east-1:394669845002:human-task-ui/VideoObjectTracking'
        },
        'PreHumanTaskLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectTracking',
        'TaskKeywords': [
            'Video Frame Object Tracking',
        ],
        'TaskTitle': 'Video frame object tracking task',
        'TaskDescription': 'Tracking the location of objects and people across video frames',
        'NumberOfHumanWorkersPerDataObject': 1,
        'TaskTimeLimitInSeconds': 123,
        'TaskAvailabilityLifetimeInSeconds': 123,
        'MaxConcurrentTaskCount': 123,
        'AnnotationConsolidationConfig': {
            'AnnotationConsolidationLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:ACS-VideoObjectTracking'
        }
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
```

## Create a Video Frame Object Tracking Adjustment or Verification Labeling Job
<a name="sms-video-ot-adjustment"></a>

You can create an adjustment and verification labeling job using the Ground Truth console or the `CreateLabelingJob` API. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

## Output Data Format
<a name="sms-video-ot-output-data"></a>

When you create a video frame object tracking labeling job, tasks are sent to workers. When these workers complete their tasks, labels are written to the Amazon S3 output location you specified when you created the labeling job. To learn about the video frame object tracking output data format, see [Video frame object tracking output](sms-data-output.md#sms-output-video-object-tracking). If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. 

# Video frame labeling job reference
<a name="sms-video-overview"></a>

Use this page to learn about the object detection and object tracking video frame labeling jobs. The information on this page applies to both of these built-in task types. 

Video frame labeling jobs are unique in the following ways:
+ You can either provide data objects that are ready to be annotated (video frames), or you can provide video files and have Ground Truth automatically extract video frames. 
+ Workers have the ability to save work as they go. 
+ You cannot use the Amazon Mechanical Turk workforce to complete your labeling tasks. 
+ Ground Truth provides a worker UI, as well as assistive and basic labeling tools, to help workers complete your tasks. You do not need to provide a worker task template. 

Use the following topics to learn more about video frame labeling jobs.

**Topics**
+ [Input data](#sms-video-input-overview)
+ [Job completion times](#sms-video-job-completion-times)
+ [Task types](#sms-video-frame-tools)
+ [Workforces](#sms-video-workforces)
+ [Worker user interface (UI)](#sms-video-worker-task-ui)
+ [Video frame job permission requirements](#sms-security-permission-video-frame)

## Input data
<a name="sms-video-input-overview"></a>

The video frame labeling job uses *sequences* of video frames. A single sequence is a series of images that have been extracted from a single video. You can either provide your own sequences of video frames, or have Ground Truth automatically extract video frame sequences from your video files. To learn more, see [Provide Video Files](sms-point-cloud-video-input-data.md#sms-point-cloud-video-frame-extraction).

Ground Truth uses sequence files to identify all images in a single sequence. All of the sequences that you want to include in a single labeling job are identified in an input manifest file. Each sequence is used to create a single worker task. You can automatically create sequence files and an input manifest file using Ground Truth automatic data setup. To learn more, see [Set up Automated Video Frame Input Data](sms-video-automated-data-setup.md). 

To learn how to manually create sequence files and an input manifest file, see [Create a Video Frame Input Manifest File](sms-video-manual-data-setup.md#sms-video-create-manifest). 

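The following is an illustrative sketch of a sequence file and the input manifest line that references it. Field names follow the Ground Truth video frame sequence format, but verify them against the linked setup pages; the bucket names and paths here are placeholders.

```python
import json

# Sketch of a video frame sequence file: it identifies all of the frames,
# extracted from one video, that make up a single worker task.
prefix = "s3://amzn-s3-demo-bucket/video1/frames/"
frame_names = ["frame0001.jpg", "frame0002.jpg", "frame0003.jpg"]

sequence = {
    "seq-no": 1,
    "prefix": prefix,
    "number-of-frames": len(frame_names),
    "frames": [
        {"frame-no": i, "frame": name}
        for i, name in enumerate(frame_names, start=1)
    ],
}

# Write the sequence file; in practice you would upload it to Amazon S3.
with open("video1-seq1.json", "w") as f:
    json.dump(sequence, f)

# Each line of the input manifest file references one sequence file in Amazon S3.
manifest_line = json.dumps(
    {"source-ref": "s3://amzn-s3-demo-bucket/video1/video1-seq1.json"}
)
print(manifest_line)
```
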
## Job completion times
<a name="sms-video-job-completion-times"></a>

Video and video frame labeling jobs can take workers hours to complete. You can set the total amount of time that workers can work on each task when you create a labeling job. The maximum time you can set for workers to work on tasks is 7 days. The default value is 3 days. 

We strongly recommend that you create tasks that workers can complete within 12 hours. Workers must keep the worker UI open while working on a task. They can save work as they go and Ground Truth saves their work every 15 minutes.

When using the SageMaker AI `CreateLabelingJob` API operation, set the total time a task is available to workers in the `TaskTimeLimitInSeconds` parameter of `HumanTaskConfig`.

When you create a labeling job in the console, you can specify this time limit when you select your workforce type and your work team.

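As a quick sanity check, the time limits described above convert to seconds (the unit that `TaskTimeLimitInSeconds` expects) as follows:

```python
# The 7-day maximum and the recommended 12-hour ceiling for a single task,
# expressed in seconds for the TaskTimeLimitInSeconds parameter.
MAX_TASK_TIME_LIMIT_SECONDS = 7 * 24 * 60 * 60   # 7 days = 604,800 seconds
recommended_limit_seconds = 12 * 60 * 60         # 12 hours = 43,200 seconds

assert recommended_limit_seconds <= MAX_TASK_TIME_LIMIT_SECONDS
print(recommended_limit_seconds)  # 43200
```
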
## Task types
<a name="sms-video-frame-tools"></a>

When you create a video object tracking or video object detection labeling job, you specify the type of annotation that you want workers to create while working on your labeling task. The annotation type determines the type of output data Ground Truth returns and defines the *task type* for your labeling job. 

If you are creating a labeling job using the API operation [`CreateLabelingJob`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html), you specify the task type using the label category configuration file parameter `annotationType`. To learn more, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md).

The following task types are available for both video object tracking and video object detection labeling jobs: 
+ **Bounding box** – Workers are provided with tools to create bounding box annotations. A bounding box is a box that a worker draws around an object to identify the pixel location and label of that object in the frame. 
+ **Polyline** – Workers are provided with tools to create polyline annotations. A polyline is defined by a series of ordered x, y coordinates. Each point added to the polyline is connected to the previous point by a line. The polyline does not have to be closed (the start point and end point do not have to be the same), and there are no restrictions on the angles formed between lines. 
+ **Polygon** – Workers are provided with tools to create polygon annotations. A polygon is a closed shape defined by a series of ordered x, y coordinates. Each point added to the polygon is connected to the previous point by a line, and there are no restrictions on the angles formed between lines. Two lines (sides) of the polygon cannot cross. The start and end point of a polygon must be the same. 
+ **Keypoint** – Workers are provided with tools to create keypoint annotations. A keypoint is a single point associated with an x, y coordinate in the video frame.

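The geometric constraints above can be expressed as simple checks. This sketch treats annotations as plain lists of (x, y) tuples; it only illustrates the definitions and is not the Ground Truth output schema.

```python
def is_closed_polygon(points):
    # A polygon needs at least three distinct vertices and must close on
    # itself: the start point and the end point are the same.
    return len(points) >= 4 and points[0] == points[-1]

def is_valid_polyline(points):
    # A polyline needs two or more ordered points and does not have to close.
    return len(points) >= 2

polygon = [(0, 0), (4, 0), (4, 3), (0, 0)]   # start == end, so it is closed
polyline = [(0, 0), (2, 1), (5, 1)]          # open shape

print(is_closed_polygon(polygon))    # True
print(is_closed_polygon(polyline))   # False
print(is_valid_polyline(polyline))   # True
```
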
## Workforces
<a name="sms-video-workforces"></a>

When you create a video frame labeling job, you need to specify a work team to complete your annotation tasks. You can choose a work team from a private workforce of your own workers, or from a vendor workforce that you select in the AWS Marketplace. You cannot use the Amazon Mechanical Turk workforce for video frame labeling jobs. 

To learn more about vendor workforces, see [Subscribe to vendor workforces](sms-workforce-management-vendor.md).

To learn how to create and manage a private workforce, see [Private workforce](sms-workforce-private.md).

## Worker user interface (UI)
<a name="sms-video-worker-task-ui"></a>

Ground Truth provides a worker user interface (UI), tools, and assistive labeling features to help workers complete your video labeling tasks. You can preview the worker UI when you create a labeling job in the console.

When you create a labeling job using the API operation `CreateLabelingJob`, you must provide an ARN provided by Ground Truth in the `HumanTaskUiArn` parameter of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UiConfig.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UiConfig.html) to specify the worker UI for your task type. You can use the `HumanTaskUiArn` with the SageMaker AI [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_RenderUiTemplate.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_RenderUiTemplate.html) API operation to preview the worker UI. 
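
As an illustration, the following Python sketch assembles the parameters for a `RenderUiTemplate` preview call. The `HumanTaskUiArn`, the IAM role ARN, and the manifest location are hypothetical placeholders; substitute the ARN that Ground Truth provides for your task type:

```python
import json

def build_render_request(human_task_ui_arn, role_arn, task_input):
    """Assemble parameters for the SageMaker RenderUiTemplate API call."""
    return {
        "HumanTaskUiArn": human_task_ui_arn,
        "RoleArn": role_arn,
        # Task.Input must be a JSON string describing a single data object.
        "Task": {"Input": json.dumps(task_input)},
    }

def render_preview(params):
    """Call RenderUiTemplate. Requires AWS credentials; not executed here."""
    import boto3
    client = boto3.client("sagemaker")
    return client.render_ui_template(**params)["RenderedContent"]

# Hypothetical ARNs and S3 URI for illustration only.
params = build_render_request(
    "arn:aws:sagemaker:us-west-2:123456789012:human-task-ui/example-video-ui",
    "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    {"taskObject": "s3://amzn-s3-demo-bucket/sequence-manifest.json"},
)
```

The rendered HTML that `RenderUiTemplate` returns can be saved to a file and opened in a browser to preview the task as workers see it.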

You provide worker instructions, labels, and, optionally, attributes that workers can use to provide more information about labels and video frames. These attributes are referred to as *label category attributes* and *frame attributes*, respectively. All of them are displayed in the worker UI.

### Label category and frame attributes
<a name="sms-video-label-attributes"></a>

When you create a video object tracking or video object detection labeling job, you can add one or more *label category attributes* and *frame attributes*:
+ **Label category attribute** – A list of options (strings), a free-form text box, or a numeric field associated with one or more labels. Workers use it to provide metadata about a label. 
+ **Frame attribute** – A list of options (strings), a free-form text box, or a numeric field that appears on each video frame a worker is sent to annotate. Workers use it to provide metadata about video frames. 

Additionally, you can use label category and frame attributes to have workers verify labels in a video frame label verification job. 

Use the following sections to learn more about these attributes. To learn how to add label category and frame attributes to a labeling job, use the **Create Labeling Job** sections on the [task type page](sms-video-task-types.md) of your choice.

#### Label category attributes
<a name="sms-video-label-category-attributes"></a>

Add label category attributes to labels to give workers the ability to provide more information about the annotations they create. A label category attribute can be added to an individual label or to all labels. When a label category attribute is applied to all labels, it is referred to as a *global label category attribute*. 

For example, if you add the label category *car*, you might also want to capture additional data about the cars that workers label, such as whether they are occluded. You can capture this metadata using label category attributes. In this example, if you add the attribute *occluded* to the *car* label category with the values *partial*, *completely*, and *no*, workers can select one of these options. 
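
In the label category configuration file, this example attribute is added under the *car* label entry. The following sketch assumes the field names shown in the label category configuration reference; verify them there before use:

```
{
    "labels": [
        {
            "label": "car",
            "categoryAttributes": [
                {
                    "name": "occluded",
                    "type": "string",
                    "enum": ["partial", "completely", "no"],
                    "description": "Is this car partially or completely hidden?"
                }
            ]
        }
    ]
}
```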

When you create a label verification job, you add label category attributes to each label you want workers to verify.

#### Frame level attributes
<a name="sms-video-frame-attributes"></a>

Add frame attributes to give workers the ability to provide more information about individual video frames. Each frame attribute you add appears on all frames. 

For example, you can add a numeric frame attribute to have workers identify the number of objects they see in a particular frame. 

In another example, you might want to provide a free-form text box to give workers the ability to answer a question about the frame. 

When you create a label verification job, you can add one or more frame attributes to ask workers to provide feedback on all labels in a video frame.
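
As a sketch, frame attributes are listed in a `frameAttributes` array of the label category configuration file. The field names below follow the label category configuration reference, and the attribute names and descriptions are illustrative; verify the exact schema in that reference:

```
{
    "frameAttributes": [
        {
            "name": "count",
            "description": "How many objects are in this frame?",
            "type": "number"
        },
        {
            "name": "feedback",
            "description": "Describe any issues with this frame.",
            "type": "string"
        }
    ]
}
```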

### Worker instructions
<a name="sms-video-worker-instructions-general"></a>

You can provide worker instructions to help your workers complete your video frame labeling tasks. You might want to cover the following topics when writing your instructions: 
+ Best practices and things to avoid when annotating objects.
+ The label category attributes provided (for object detection and object tracking tasks) and how to use them.
+ How to save time while labeling by using keyboard shortcuts. 

You can add your worker instructions using the SageMaker AI console while creating a labeling job. If you create a labeling job using the API operation `CreateLabelingJob`, you specify worker instructions in your label category configuration file. 
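
For example, worker instructions appear in an `instructions` object in that file. This sketch assumes the `shortInstruction` and `fullInstruction` field names from the label category configuration reference; the instruction text is illustrative:

```
{
    "instructions": {
        "shortInstruction": "Draw a tight box around every car.",
        "fullInstruction": "<p>Detailed instructions, which can include HTML markup.</p>"
    }
}
```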

In addition to your instructions, Ground Truth provides a link to help workers navigate and use the worker portal. To view these instructions, select your task type on [Worker Instructions](sms-video-worker-instructions.md).

### Declining tasks
<a name="sms-decline-task-video"></a>

Workers can decline tasks. 

Workers might decline a task if the instructions are not clear, the input data does not display correctly, or they encounter some other issue with the task. If the number of workers per dataset object ([https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-NumberOfHumanWorkersPerDataObject](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-NumberOfHumanWorkersPerDataObject)) decline a task, the associated data object is marked as expired and is not sent to additional workers.

## Video frame job permission requirements
<a name="sms-security-permission-video-frame"></a>

When you create a video frame labeling job, in addition to the permission requirements found in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md), you must add a CORS policy to your S3 bucket that contains your input manifest file. 

### CORS permission policy for your S3 bucket
<a name="sms-permissions-add-cors-video-frame"></a>

When you create a video frame labeling job, you specify the buckets in Amazon S3 where your input data and manifest file are located and where your output data is stored. These buckets can be the same. You must attach the following cross-origin resource sharing (CORS) policy to your input and output buckets. If you use the Amazon S3 console to add the policy to your bucket, you must use the JSON format.

**JSON**

```
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "Access-Control-Allow-Origin"
        ],
        "MaxAgeSeconds": 3000
    }
]
```

**XML**

```
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
```

To learn how to add a CORS policy to an S3 bucket, see [How do I add cross-domain resource sharing with CORS?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-cors-configuration.html) in the *Amazon Simple Storage Service User Guide*.
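
As an alternative to the console, you can apply the same policy with an AWS SDK. The following Python sketch uses the standard S3 `put_bucket_cors` operation; the bucket name is a placeholder, and the call requires AWS credentials with permission to set the bucket's CORS configuration:

```python
# CORS rules matching the JSON policy shown above.
CORS_RULES = {
    "CORSRules": [
        {
            "AllowedHeaders": ["*"],
            "AllowedMethods": ["GET", "HEAD", "PUT"],
            "AllowedOrigins": ["*"],
            "ExposeHeaders": ["Access-Control-Allow-Origin"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

def apply_cors(bucket_name):
    """Attach the CORS policy to the given bucket (placeholder name)."""
    import boto3  # imported here; the call requires AWS credentials
    s3 = boto3.client("s3")
    s3.put_bucket_cors(Bucket=bucket_name, CORSConfiguration=CORS_RULES)
```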