

# 3D point cloud task types
<a name="sms-point-cloud-task-types"></a>

You can use the Ground Truth 3D point cloud labeling modality for a variety of use cases. The following list briefly describes each 3D point cloud task type. For additional details and instructions on how to create a labeling job using a specific task type, select the task type name to see its task type page. 
+ [3D point cloud object detection](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-detection.html) – Use this task type when you want workers to locate and classify objects in a 3D point cloud by adding and fitting 3D cuboids around objects. 
+ [3D point cloud object tracking](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-tracking.html) – Use this task type when you want workers to add and fit 3D cuboids around objects to track their movement across a sequence of 3D point cloud frames. For example, you can use this task type to ask workers to track the movement of vehicles across multiple point cloud frames.
+ [3D point cloud semantic segmentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-semantic-segmentation.html) – Use this task type when you want workers to create a point-level semantic segmentation mask by painting objects in a 3D point cloud using different colors where each color is assigned to one of the classes you specify. 
+ 3D point cloud adjustment task types – Each of the task types above has an associated *adjustment* task type that you can use to audit and adjust annotations generated from a 3D point cloud labeling job. Refer to the associated task type's page to learn how to create an adjustment labeling job for that task. 

# Classify objects in a 3D point cloud with object detection
<a name="sms-point-cloud-object-detection"></a>

Use this task type when you want workers to classify objects in a 3D point cloud by drawing 3D cuboids around objects. For example, you can use this task type to ask workers to identify different types of objects in a point cloud, such as cars, bikes, and pedestrians. The following page gives important information about the labeling job, as well as steps to create one.

For this task type, the *data object* that workers label is a single point cloud frame. Ground Truth renders a 3D point cloud using point cloud data you provide. You can also provide camera data to give workers more visual information about scenes in the frame, and to help workers draw 3D cuboids around objects. 

Ground Truth provides workers with tools to annotate objects with 9 degrees of freedom (x,y,z,rx,ry,rz,l,w,h) in three dimensions in both the 3D scene and the projected side views (top, side, and back). If you provide sensor fusion information (like camera data), when a worker adds a cuboid to identify an object in the 3D point cloud, the cuboid shows up and can be modified in the 2D images. After a cuboid has been added, all edits made to that cuboid in the 2D or 3D scene are projected into the other view.
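The 9 degrees of freedom map directly onto a cuboid's position, orientation, and extent. As a small illustration only (not a Ground Truth schema), the parameterization can be written as a data structure whose fields mirror the tuple above:

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    """Illustrative 9-DOF cuboid: position, rotation, and extent."""
    x: float   # center position along the x-axis
    y: float   # center position along the y-axis
    z: float   # center position along the z-axis
    rx: float  # rotation about the x-axis (roll)
    ry: float  # rotation about the y-axis (pitch)
    rz: float  # rotation about the z-axis (yaw)
    l: float   # length
    w: float   # width
    h: float   # height
```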

You can create a job to adjust annotations created in a 3D point cloud object detection labeling job using the 3D point cloud object detection adjustment task type. 

If you are a new user of the Ground Truth 3D point cloud labeling modality, we recommend you review [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). This labeling modality is different from other Ground Truth task types, and this page provides an overview of important details you should be aware of when creating a 3D point cloud labeling job.

**Topics**
+ [View the Worker Task Interface](#sms-point-cloud-object-detection-worker-ui)
+ [Create a 3D Point Cloud Object Detection Labeling Job](#sms-point-cloud-object-detection-create-labeling-job)
+ [Create a 3D Point Cloud Object Detection Adjustment or Verification Labeling Job](#sms-point-cloud-object-detection-adjustment-verification)
+ [Output Data Format](#sms-point-cloud-object-detection-output-data)

## View the Worker Task Interface
<a name="sms-point-cloud-object-detection-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D point cloud object detection annotation tasks. When you create the labeling job, you provide the Amazon Resource Name (ARN) for a pre-built Ground Truth worker UI in the `HumanTaskUiArn` parameter. When you create a labeling job using this task type in the console, this worker UI is automatically used. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, it is recommended that you create a labeling job using the console to ensure your label attributes, point cloud frames, and if applicable, images, appear as expected. 

The following is a GIF of the 3D point cloud object detection worker task interface. If you provide camera data for sensor fusion in the world coordinate system, images are matched up with scenes in the point cloud frame. These images appear in the worker portal as shown in the following GIF. 

![\[Gif showing how a worker can annotate a 3D point cloud in the Ground Truth worker portal.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/ot_basic_tools.gif)


Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom in to them.
+ Use a mouse scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

After a worker places a cuboid in the 3D scene, a side-view appears with the three projected views: top, side, and back. These side-views show points in and around the placed cuboid and help workers refine cuboid boundaries in that area. Workers can zoom in and out of each of those side-views using their mouse. 

The following video demonstrates movements around the 3D point cloud and in the side-view. 

![\[Gif showing movements around the 3D point cloud and the side-view.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/navigate_od_worker_ui.gif)


Additional view options and features are available in the **View** menu in the worker UI. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-object-detection) for a comprehensive overview of the Worker UI. 

**Assistive Labeling Tools**  
Ground Truth helps workers annotate 3D point clouds faster and more accurately using machine learning and computer vision powered assistive labeling tools for 3D point cloud object detection tasks. The following assistive labeling tools are available for this task type:
+ **Snapping** – Workers can add a cuboid around an object and use a keyboard shortcut or menu option to have Ground Truth's autofit tool snap the cuboid tightly around the object. 
+ **Set to ground** – After a worker adds a cuboid to the 3D scene, the worker can automatically snap the cuboid to the ground. For example, the worker can use this feature to snap a cuboid to the road or sidewalk in the scene. 
+ **Multi-view labeling** – After a worker adds a 3D cuboid to the 3D scene, a side panel displays front, side, and top perspectives to help the worker adjust the cuboid tightly around the object. In all of these views, the cuboid includes an arrow that indicates the orientation, or heading of the object. When the worker adjusts the cuboid, the adjustment will appear in real time on all of the views (that is, 3D, top, side, and front). 
+ **Sensor fusion** – If you provide data for sensor fusion, workers can adjust annotations in the 3D scenes and in 2D images, and the annotations will be projected into the other view in real time. Additionally, workers will have the option to view the direction the camera is facing and the camera frustum.
+ **View options** – Enables workers to easily hide or view cuboids, label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 

## Create a 3D Point Cloud Object Detection Labeling Job
<a name="sms-point-cloud-object-detection-create-labeling-job"></a>

You can create a 3D point cloud labeling job using the SageMaker AI console or API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A single-frame input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, you may also want to review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).

Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

Use one of the following sections to learn how to create a labeling job using the console or an API. 

### Create a Labeling Job (Console)
<a name="sms-point-cloud-object-detection-create-labeling-job-console"></a>

Follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a 3D point cloud object detection labeling job in the SageMaker AI console. While you are creating your labeling job, be aware of the following: 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ Optionally, you can provide label category and frame attributes. Workers can assign one or more of these attributes to annotations to provide more information about that object. For example, you might want to use the attribute *occluded* to have workers identify when an object is partially obstructed.
+ Automated data labeling and annotation consolidation are not supported for 3D point cloud labeling tasks.
+ 3D point cloud object detection labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs when you select your work team (up to 7 days, or 604,800 seconds). 

### Create a Labeling Job (API)
<a name="sms-point-cloud-object-detection-create-labeling-job-api"></a>

This section covers details you need to know when you create a labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is available in all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudObjectDetection`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ You specify your labels, label category and frame attributes, and worker instructions in a label category configuration file. To learn how to create this file, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md). 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectDetection`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:ACS-3DPointCloudObjectDetection`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` must be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`.
+ 3D point cloud object detection labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 
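Putting these requirements together, the following is a minimal boto3 sketch of a `CreateLabelingJob` request for this task type in us-east-1. The bucket, role, work team, and job names are placeholders, and the `-ref` suffix on the label attribute name follows the convention shown for the other 3D point cloud task types in this guide; treat this as a starting point rather than a complete, validated request.

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

sagemaker.create_labeling_job(
    LabelingJobName="3d-od-example-job",
    # Label attribute name ending in -ref, per the 3D point cloud convention.
    LabelAttributeName="3d-od-example-job-ref",
    InputConfig={
        "DataSource": {
            "S3DataSource": {
                # Single-frame point cloud input manifest (placeholder URI).
                "ManifestS3Uri": "s3://amzn-s3-demo-bucket/manifests/frames.manifest"
            }
        }
    },
    OutputConfig={"S3OutputPath": "s3://amzn-s3-demo-bucket/output/"},
    RoleArn="arn:aws:iam::111122223333:role/GroundTruthExecutionRole",
    # Labels, label category and frame attributes, and worker instructions.
    LabelCategoryConfigS3Uri="s3://amzn-s3-demo-bucket/config/label-categories.json",
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/my-team",
        # Pre-built worker UI for this task type; do not set UiTemplateS3Uri.
        "UiConfig": {
            "HumanTaskUiArn": "arn:aws:sagemaker:us-east-1:394669845002:human-task-ui/PointCloudObjectDetection"
        },
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectDetection",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:ACS-3DPointCloudObjectDetection"
        },
        "TaskTitle": "3D point cloud object detection",
        "TaskDescription": "Draw and fit 3D cuboids around objects in the point cloud",
        "NumberOfHumanWorkersPerDataObject": 1,  # must be 1 for this task type
        "TaskTimeLimitInSeconds": 604800,        # up to 7 days
    },
)
```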

## Create a 3D Point Cloud Object Detection Adjustment or Verification Labeling Job
<a name="sms-point-cloud-object-detection-adjustment-verification"></a>

You can create an adjustment or verification labeling job using the Ground Truth console or `CreateLabelingJob` API. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

When you create an adjustment labeling job, your input data can include labels and yaw, pitch, and roll measurements from a previous labeling job or an external source. In the adjustment job, pitch and roll are visualized in the worker UI but cannot be modified. Yaw is adjustable. 

Ground Truth uses Tait-Bryan angles with the following intrinsic rotations to visualize yaw, pitch and roll in the worker UI. First, rotation is applied to the vehicle according to the z-axis (yaw). Next, the rotated vehicle is rotated according to the intrinsic y'-axis (pitch). Finally, the vehicle is rotated according to the intrinsic x''-axis (roll). 
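In matrix form, with yaw $\psi$, pitch $\theta$, and roll $\phi$, and using the column-vector convention, this intrinsic z-y'-x'' sequence composes as

$$
R = R_z(\psi)\, R_y(\theta)\, R_x(\phi),
\qquad
R_z(\psi) =
\begin{pmatrix}
\cos\psi & -\sin\psi & 0\\
\sin\psi & \cos\psi & 0\\
0 & 0 & 1
\end{pmatrix},
$$

with $R_y(\theta)$ and $R_x(\phi)$ the analogous elemental rotations about the y- and x-axes. An intrinsic z-y'-x'' rotation is equivalent to an extrinsic x-y-z rotation about the fixed axes, which is why the matrices multiply in this order.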

## Output Data Format
<a name="sms-point-cloud-object-detection-output-data"></a>

When you create a 3D point cloud object detection labeling job, tasks are sent to workers. When these workers complete their tasks, labels are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D point cloud object detection output data format, see [3D point cloud object detection output](sms-data-output.md#sms-output-point-cloud-object-detection). 
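If you work with the API, a minimal boto3 sketch like the following can poll the job status and locate the output manifest once the job is `Completed`; the job name is a placeholder.

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

response = sagemaker.describe_labeling_job(LabelingJobName="3d-od-example-job")
status = response["LabelingJobStatus"]
print("Labeling job status:", status)

if status == "Completed":
    # S3 URI of the output manifest containing the consolidated labels.
    print("Output manifest:", response["LabelingJobOutput"]["OutputDatasetS3Uri"])
```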

# Understand the 3D point cloud object tracking task type
<a name="sms-point-cloud-object-tracking"></a>

Use this task type when you want workers to add and fit 3D cuboids around objects to track their movement across 3D point cloud frames. For example, you can use this task type to ask workers to track the movement of vehicles across multiple point cloud frames. 

For this task type, the data object that workers label is a sequence of point cloud frames. A *sequence* is defined as a temporal series of point cloud frames. Ground Truth renders a series of 3D point cloud visualizations using a sequence you provide and workers can switch between these 3D point cloud frames in the worker task interface. 

Ground Truth provides workers with tools to annotate objects with 9 degrees of freedom (x,y,z,rx,ry,rz,l,w,h) in three dimensions in both the 3D scene and the projected side views (top, side, and back). When a worker draws a cuboid around an object, that cuboid is given a unique ID, for example `Car:1` for one car in the sequence and `Car:2` for another. Workers use that ID to label the same object in multiple frames.

You can also provide camera data to give workers more visual information about scenes in the frame, and to help workers draw 3D cuboids around objects. When a worker adds a 3D cuboid to identify an object in either the 2D image or the 3D point cloud, the cuboid shows up in the other view. 

You can adjust annotations created in a 3D point cloud object tracking labeling job using the 3D point cloud object tracking adjustment task type. 

If you are a new user of the Ground Truth 3D point cloud labeling modality, we recommend you review [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). This labeling modality is different from other Ground Truth task types, and this page provides an overview of important details you should be aware of when creating a 3D point cloud labeling job.

The following topics explain how to create a 3D point cloud object tracking job, show what the worker task interface looks like (what workers see when they work on this task), and provide an overview of the output data you get when workers complete their tasks. The final topic provides useful information for creating object tracking adjustment or verification labeling jobs.

**Topics**
+ [Create a 3D point cloud object tracking labeling job](sms-point-cloud-object-tracking-create-labeling-job.md)
+ [View the worker task interface for a 3D point cloud object tracking task](sms-point-cloud-object-tracking-worker-ui.md)
+ [Output data for a 3D point cloud object tracking labeling job](sms-point-cloud-object-tracking-output-data.md)
+ [Information for creating a 3D point cloud object tracking adjustment or verification labeling job](sms-point-cloud-object-tracking-adjustment-verification.md)

# Create a 3D point cloud object tracking labeling job
<a name="sms-point-cloud-object-tracking-create-labeling-job"></a>

You can create a 3D point cloud labeling job using the SageMaker AI console or API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A sequence input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, we recommend that you review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). A sketch of one manifest line follows this list. 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
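As a rough sketch of what a sequence manifest contains, each line is a JSON object whose `source-ref` points to a sequence file that in turn describes the frames. The bucket and key below are placeholders, and the linked manifest topic is the authoritative reference for the schema.

```json
{"source-ref": "s3://amzn-s3-demo-bucket/sequences/seq-0001.json"}
```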

Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

To learn how to create a labeling job using the console or an API, see the following sections. 

## Create a labeling job (console)
<a name="sms-point-cloud-object-tracking-create-labeling-job-console"></a>

Follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a 3D point cloud object tracking labeling job in the SageMaker AI console. While you are creating your labeling job, be aware of the following: 
+ Your input manifest file must be a sequence manifest file. For more information, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). 
+ Optionally, you can provide label category attributes. Workers can assign one or more of these attributes to annotations to provide more information about that object. For example, you might want to use the attribute *occluded* to have workers identify when an object is partially obstructed.
+ Automated data labeling and annotation consolidation are not supported for 3D point cloud labeling tasks. 
+ 3D point cloud object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs when you select your work team (up to 7 days, or 604,800 seconds). 

## Create a labeling job (API)
<a name="sms-point-cloud-object-tracking-create-labeling-job-api"></a>

This section covers details you need to know when you create a labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is available in all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudObjectTracking`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ot-labels-ref`. 
+ Your input manifest file must be a point cloud frame sequence manifest file. For more information, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). 
+ You specify your labels, label category and frame attributes, and worker instructions in a label category configuration file. To learn how to create this file, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md). 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `PRE-3DPointCloudObjectTracking`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `ACS-3DPointCloudObjectTracking`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` should be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`. 
+ 3D point cloud object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 
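Because the worker UI ARN varies only by Region, and the Lambda ARNs vary by Region and by the Region-specific AWS account listed in the API reference, a small helper can assemble them. This is a convenience sketch: the UI account (394669845002) is taken from this topic, and you must look up the Lambda account for your Region (432418664414 is the us-east-1 value used in the examples above).

```python
def point_cloud_tracking_arns(region: str, lambda_account: str) -> dict:
    """Assemble the pre-built UI and Region-specific Lambda ARNs
    for 3D point cloud object tracking."""
    return {
        "HumanTaskUiArn": (
            f"arn:aws:sagemaker:{region}:394669845002:"
            "human-task-ui/PointCloudObjectTracking"
        ),
        "PreHumanTaskLambdaArn": (
            f"arn:aws:lambda:{region}:{lambda_account}:"
            "function:PRE-3DPointCloudObjectTracking"
        ),
        "AnnotationConsolidationLambdaArn": (
            f"arn:aws:lambda:{region}:{lambda_account}:"
            "function:ACS-3DPointCloudObjectTracking"
        ),
    }

# Example for us-east-1, using the Lambda account ID shown in the examples above.
arns = point_cloud_tracking_arns("us-east-1", "432418664414")
```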

# View the worker task interface for a 3D point cloud object tracking task
<a name="sms-point-cloud-object-tracking-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D point cloud object tracking annotation tasks. When you create the labeling job, you provide the Amazon Resource Name (ARN) for a pre-built Ground Truth UI in the `HumanTaskUiArn` parameter. When you create a labeling job using this task type in the console, this UI is automatically used. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, it is recommended that you create a labeling job using the console to ensure your label attributes, point cloud frames, and if applicable, images, appear as expected. 

The following GIF shows the 3D point cloud object tracking worker task interface and demonstrates how the worker can navigate the point cloud frames in the sequence. The annotation tools are part of the worker task interface; they are not available in the preview interface. 

![\[Gif showing how the worker can navigate the point cloud frames in the sequence.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/nav_frames.gif)


After a worker adds a single cuboid, that cuboid is replicated in all frames of the sequence with the same ID. When the worker adjusts the cuboid in another frame, Ground Truth interpolates the movement of that object and adjusts all cuboids between the manually adjusted frames. The following GIF demonstrates this interpolation feature. In the navigation bar on the bottom-left, red areas indicate manually adjusted frames. 

![\[Gif showing how the location of a cuboid is inferred in in-between frames.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/label-interpolation.gif)


If you provide camera data for sensor fusion, images are matched up with scenes in point cloud frames and appear alongside the 3D scene in the worker portal. 

Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom in to them.
+ Use a mouse scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

After a worker places a cuboid in the 3D scene, a side-view appears with the three projected views: top, side, and back. These side-views show points in and around the placed cuboid and help workers refine cuboid boundaries in that area. Workers can zoom in and out of each of those side-views using their mouse. 

The following video demonstrates movements around the 3D point cloud and in the side-view. 

![\[Gif showing movements around the 3D point cloud showing a street scene.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/nav_general_UI.gif)


Additional view options and features are available. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-object-tracking.html) for a comprehensive overview of the Worker UI. 

## Worker tools
<a name="sms-point-cloud-object-tracking-worker-tools"></a>

Workers can navigate through the 3D point cloud by zooming in and out, and moving in all directions around the cloud using the mouse and keyboard shortcuts. If workers click on a point in the point cloud, the UI automatically zooms into that area. Workers can use various tools to draw 3D cuboids around objects. For more information, see **Assistive Labeling Tools**. 

After workers have placed a 3D cuboid in the point cloud, they can adjust these cuboids to fit tightly around objects using a variety of views: directly in the 3D scene, in a side-view featuring three zoomed-in perspectives of the point cloud around the box, and, if you include images for sensor fusion, directly in the 2D image. 

View options enable workers to easily hide or view label text, a ground mesh, and additional point attributes. Workers can also choose between perspective and orthogonal projections. 

**Assistive Labeling Tools**  
Ground Truth helps workers annotate 3D point clouds faster and more accurately using UX, machine learning and computer vision powered assistive labeling tools for 3D point cloud object tracking tasks. The following assistive labeling tools are available for this task type:
+ **Label autofill** – When a worker adds a cuboid to a frame, a cuboid with the same dimensions and orientation is automatically added to all frames in the sequence. 
+ **Label interpolation** – After a worker has labeled a single object in two frames, Ground Truth uses those annotations to interpolate the movement of that object between those two frames. Label interpolation can be turned on and off. A simple sketch of this idea follows this list.
+ **Bulk label and attribute management** – Workers can add, delete, and rename annotations, label category attributes, and frame attributes in bulk. 
  + Workers can manually delete annotations for a given object before or after a frame. For example, a worker can delete all labels for an object after frame 10 if that object is no longer located in the scene after that frame. 
  + If a worker accidentally bulk deletes all annotations for an object, they can add them back. For example, if a worker deletes all annotations for an object before frame 100, they can bulk add them to those frames. 
  + Workers can rename a label in one frame and all 3D cuboids assigned that label are updated with the new name across all frames. 
  + Workers can use bulk editing to add or edit label category attributes and frame attributes in multiple frames.
+ **Snapping** – Workers can add a cuboid around an object and use a keyboard shortcut or menu option to have Ground Truth's autofit tool snap the cuboid tightly around the object's boundaries. 
+ **Fit to ground** – After a worker adds a cuboid to the 3D scene, the worker can automatically snap the cuboid to the ground. For example, the worker can use this feature to snap a cuboid to the road or sidewalk in the scene. 
+ **Multi-view labeling** – After a worker adds a 3D cuboid to the 3D scene, a side panel displays front and two side perspectives to help the worker adjust the cuboid tightly around the object. Workers can adjust the cuboid in either the 3D point cloud or the side panel, and the adjustments appear in the other views in real time. 
+ **Sensor fusion** – If you provide data for sensor fusion, workers can adjust annotations in the 3D scenes and in 2D images, and the annotations will be projected into the other view in real time. 
+ **Auto-merge cuboids** – Workers can automatically merge two cuboids across all frames if they determine that cuboids with different labels actually represent a single object. 
+ **View options** – Enables workers to easily hide or view label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 
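To build intuition for the label interpolation feature mentioned above, the following toy sketch linearly interpolates a cuboid's center between two worker-adjusted keyframes. This is only a conceptual illustration under a constant-velocity assumption; it is not Ground Truth's actual interpolation algorithm.

```python
def interpolate_center(frame, key_a, key_b):
    """Linearly interpolate a cuboid center between two keyframes.

    key_a and key_b are (frame_index, (x, y, z)) tuples taken from
    worker-adjusted frames; `frame` lies between them.
    """
    (fa, ca), (fb, cb) = key_a, key_b
    t = (frame - fa) / (fb - fa)  # fraction of the way from key_a to key_b
    return tuple(a + t * (b - a) for a, b in zip(ca, cb))

# A car adjusted at frames 0 and 10; infer its center at frame 4.
print(interpolate_center(4, (0, (0.0, 0.0, 0.0)), (10, (20.0, 5.0, 0.0))))
# -> (8.0, 2.0, 0.0)
```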

# Output data for a 3D point cloud object tracking labeling job
<a name="sms-point-cloud-object-tracking-output-data"></a>

When you create a 3D point cloud object tracking labeling job, tasks are sent to workers. When these workers complete their tasks, their annotations are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D point cloud object tracking output data format, see [3D point cloud object tracking output](sms-data-output.md#sms-output-point-cloud-object-tracking). 

# Information for creating a 3D point cloud object tracking adjustment or verification labeling job
<a name="sms-point-cloud-object-tracking-adjustment-verification"></a>

You can create an adjustment or verification labeling job using the Ground Truth console or `CreateLabelingJob` API. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

When you create an adjustment labeling job, your input data can include labels and yaw, pitch, and roll measurements from a previous labeling job or an external source. In the adjustment job, pitch and roll are visualized in the worker UI but cannot be modified. Yaw is adjustable. 

Ground Truth uses Tait-Bryan angles with the following intrinsic rotations to visualize yaw, pitch and roll in the worker UI. First, rotation is applied to the vehicle according to the z-axis (yaw). Next, the rotated vehicle is rotated according to the intrinsic y'-axis (pitch). Finally, the vehicle is rotated according to the intrinsic x''-axis (roll). 

# Understand the 3D point cloud semantic segmentation task type
<a name="sms-point-cloud-semantic-segmentation"></a>

Semantic segmentation involves classifying individual points of a 3D point cloud into pre-specified categories. Use this task type when you want workers to create a point-level semantic segmentation mask for 3D point clouds. For example, if you specify the classes `car`, `pedestrian`, and `bike`, workers select one class at a time and paint all of the points that class applies to in the same color in the point cloud. 

For this task type, the data object that workers label is a single point cloud frame. Ground Truth generates a 3D point cloud visualization using point cloud data you provide. You can also provide camera data to give workers more visual information about scenes in the frame, and to help workers paint objects. When a worker paints an object in either the 2D image or the 3D point cloud, the paint shows up in the other view. 

You can also adjust or verify annotations created in a 3D point cloud semantic segmentation labeling job using the 3D point cloud semantic segmentation adjustment or verification task type. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

If you are a new user of the Ground Truth 3D point cloud labeling modality, we recommend you review [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). This labeling modality is different from other Ground Truth task types, and this topic provides an overview of important details you should be aware of when creating a 3D point cloud labeling job.

The following topics explain how to create a 3D point cloud semantic segmentation job, show what the worker task interface looks like (what workers see when they work on this task), and provide an overview of the output data you get when workers complete their tasks.

**Topics**
+ [Create a 3D point cloud semantic segmentation labeling job](sms-point-cloud-semantic-segmentation-create-labeling-job.md)
+ [View the worker task interface for a 3D point cloud semantic segmentation job](sms-point-cloud-semantic-segmentation-worker-ui.md)
+ [Output data for a 3D point cloud semantic segmentation job](sms-point-cloud-semantic-segmentation-input-data.md)

# Create a 3D point cloud semantic segmentation labeling job
<a name="sms-point-cloud-semantic-segmentation-create-labeling-job"></a>

You can create a 3D point cloud labeling job using the SageMaker AI console or API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A single-frame input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, we recommend that you review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk workers for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
+ A label category configuration file. For more information, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md). 

Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

Use one of the following sections to learn how to create a labeling job using the console or an API. 

## Create a labeling job (console)
<a name="sms-point-cloud-semantic-segmentation-console"></a>

Follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a 3D point cloud semantic segmentation labeling job in the SageMaker AI console. While you are creating your labeling job, be aware of the following: 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ Automated data labeling and annotation consolidation are not supported for 3D point cloud labeling tasks. 
+ 3D point cloud semantic segmentation labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs when you select your work team (up to 7 days, or 604,800 seconds). 

## Create a labeling job (API)
<a name="sms-point-cloud-semantic-segmentation-api"></a>

This section covers details you need to know when you create a labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is available in all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

The page, [Create a Labeling Job (API)](sms-create-labeling-job-api.md), provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudSemanticSegmentation`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ss-labels-ref`. 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ You specify your labels and worker instructions in a label category configuration file. See [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md) to learn how to create this file. 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudSemanticSegmentation`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:ACS-3DPointCloudSemanticSegmentation`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` should be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`. 
+ 3D point cloud semantic segmentation labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 
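For reference, a single-frame manifest is a JSON Lines file with one object per point cloud frame. The following is a rough sketch with placeholder values; the linked manifest topic is the authoritative reference for the full schema, including optional ego-vehicle and camera-image metadata.

```json
{"source-ref": "s3://amzn-s3-demo-bucket/frames/frame-0001.bin", "source-ref-metadata": {"format": "binary/xyzi", "unix-timestamp": 1566861644.759115}}
```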

# View the worker task interface for a 3D point cloud semantic segmentation job
<a name="sms-point-cloud-semantic-segmentation-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D point cloud semantic segmentation annotation tasks. When you create the labeling job, you provide the Amazon Resource Name (ARN) for a pre-built Ground Truth UI in the `HumanTaskUiArn` parameter. When you create a labeling job using this task type in the console, this UI is automatically used. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, it is recommended that you create a labeling job using the console to ensure your label attributes, point cloud frames, and if applicable, images, appear as expected. 

The following is a GIF of the 3D point cloud semantic segmentation worker task interface. If you provide camera data for sensor fusion, images are matched with scenes in the point cloud frame. Workers can paint objects in either the 3D point cloud or the 2D image, and the paint appears in the corresponding location in the other medium. These images appear in the worker portal as shown in the following GIF. 

![\[Gif showing how workers can use the 3D point cloud and 2D image together to paint objects.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_paint_sf.gif)


Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom in to them.
+ Use a mouse scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

The following video demonstrates movements around the 3D point cloud. Workers can hide and re-expand all side views and menus. In this GIF, the side-views and menus have been collapsed. 

![\[Gif showing how workers can move around the 3D point cloud.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_nav_worker_portal.gif)


The following GIF demonstrates how a worker can label multiple objects quickly, refine painted objects using the Unpaint option and then view only points that have been painted. 

![\[Gif showing how a worker can label multiple objects.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss-view-options.gif)


Additional view options and features are available. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-semantic-segmentation.html) for a comprehensive overview of the Worker UI. 

**Worker Tools**  
Workers can navigate through the 3D point cloud by zooming in and out, and moving in all directions around the cloud using the mouse and keyboard shortcuts. When you create a semantic segmentation job, workers have the following tools available to them: 
+ A paint brush to paint and unpaint objects. Workers paint objects by selecting a label category and then painting in the 3D point cloud. Workers unpaint objects by selecting the Unpaint option from the label category menu and using the paint brush to erase paint. 
+ A polygon tool that workers can use to select and paint an area in the point cloud. 
+ A background paint tool, which enables workers to paint behind objects they have already annotated without altering the original annotations. For example, workers might use this tool to paint the road after painting all of the cars on the road. 
+ View options that enable workers to easily hide or view label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 

# Output data for a 3D point cloud semantic segmentation job
<a name="sms-point-cloud-semantic-segmentation-input-data"></a>

When you create a 3D point cloud semantic segmentation labeling job, tasks are sent to workers. When these workers complete their tasks, their annotations are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D point cloud semantic segmentation output data format, see [3D point cloud semantic segmentation output](sms-data-output.md#sms-output-point-cloud-segmentation). 

# Understand the 3D-2D point cloud object tracking task type
<a name="sms-point-cloud-3d-2d-object-tracking"></a>

Use this task type when you want workers to link 3D point cloud annotations with 2D image annotations, and also to link 2D image annotations among various cameras. Currently, Ground Truth supports cuboids for annotation in a 3D point cloud and bounding boxes for annotation in 2D videos. For example, you can use this task type to ask workers to link the movement of a vehicle in a 3D point cloud with its 2D video. Using 3D-2D linking, you can easily correlate point cloud data (like the distance of a cuboid) to video data (a bounding box) for up to 8 cameras.

Ground Truth provides workers with tools to annotate cuboids in a 3D point cloud and bounding boxes in up to 8 cameras using the same annotation UI. Workers can also link various bounding boxes for the same object across different cameras. For example, a bounding box in camera1 can be linked to a bounding box in camera2. This lets you correlate an object across multiple cameras using a unique ID. 

**Note**  
Currently, SageMaker AI does not support creating a 3D-2D linking job using the console. To create a 3D-2D linking job using the SageMaker API, see [Create a labeling job (API)](sms-3d-2d-point-cloud-object-tracking-create-labeling-job.md#sms-point-cloud-3d-2d-object-tracking-create-labeling-job-api). 

The following topics explain how to create a 3D-2D point cloud object tracking labeling job, show what the worker task interface looks like (what workers see when they work on this task), and provide an overview of the output data you get when workers complete their tasks.

**Topics**
+ [Create a 3D-2D point cloud object tracking labeling job](sms-3d-2d-point-cloud-object-tracking-create-labeling-job.md)
+ [View the worker task interface for a 3D-2D object tracking labeling job](sms-point-cloud-3d-2d-object-tracking-worker-ui.md)
+ [Output data for a 3D-2D object tracking labeling job](sms-point-cloud-3d-2d-object-tracking-output-data.md)

# Create a 3D-2D point cloud object tracking labeling job
<a name="sms-3d-2d-point-cloud-object-tracking-create-labeling-job"></a>

You can create a 3D-2D point cloud labeling job using the SageMaker API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
+ A CORS policy added to the S3 bucket that contains your input data. To set the required CORS headers on the S3 bucket that contains your input images in the Amazon S3 console, follow the directions in [CORS Permission Requirement](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-cors-update.html). A sample policy follows this list.
+ Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 
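As a sketch of what such a CORS configuration typically looks like (verify the exact policy in the linked CORS topic), the following JSON allows the GET requests the worker portal needs to fetch your input images:

```json
[
    {
        "AllowedHeaders": [],
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]
```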

To learn how to create a labeling job using the API, see the following sections. 

## Create a labeling job (API)
<a name="sms-point-cloud-3d-2d-object-tracking-create-labeling-job-api"></a>

This section covers details you need to know when you create a 3D-2D object tracking labeling job using the SageMaker API operation `CreateLabelingJob`. This API defines this operation for all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudObjectTracking`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ot-labels-ref`. 
+ Your input manifest file must be a point cloud frame sequence manifest file. For more information, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). You also need to provide a label category configuration file, as described in the following input data format section.
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `PRE-3DPointCloudObjectTracking`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `ACS-3DPointCloudObjectTracking`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` should be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`. 
+ 3D-2D object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 

**Note**  
After you have successfully created a 3D-2D object tracking job, it shows up on the console under labeling jobs. The task type for the job is displayed as **Point Cloud Object Tracking**.

## Input data format
<a name="sms-point-cloud-3d-2d-object-tracking-input-data"></a>

You can create a 3D-2D object tracking job using the SageMaker API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following:
+ A sequence input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, we recommend that you review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ You specify your labels, label category and frame attributes, and worker instructions in a label category configuration file. To learn how to create this file, see [Create a Labeling Category Configuration File with Label Category and Frame Attributes](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-label-cat-config-attributes.html). The following is an example of a label category configuration file for creating a 3D-2D object tracking job.

  ```
  {
      "document-version": "2020-03-01",
      "categoryGlobalAttributes": [
          {
              "name": "Occlusion",
              "description": "global attribute that applies to all label categories",
              "type": "string",
              "enum":[
                  "Partial",
                  "Full"
              ]
          }
      ],
      "labels":[
          {
              "label": "Car",
              "attributes": [
                  {
                      "name": "Type",
                      "type": "string",
                      "enum": [
                          "SUV",
                          "Sedan"
                      ]
                  } 
              ]
          },
          {
              "label": "Bus",
              "attributes": [
                  {
                      "name": "Size",
                      "type": "string",
                      "enum": [
                          "Large",
                          "Medium",
                          "Small"
                      ]
                  }
              ]
          }
      ],
      "instructions": {
          "shortIntroduction": "Draw a tight cuboid around objects after you select a category.",
          "fullIntroduction": "<p>Use this area to add more detailed worker instructions.</p>"
      },
      "annotationType": [
          {
              "type": "BoundingBox"
          },
          {
              "type": "Cuboid"
          }
      ]
  }
  ```
**Note**  
You must provide both `BoundingBox` and `Cuboid` as `annotationType` entries in the label category configuration file to create a 3D-2D object tracking job. 
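Because both `annotationType` entries are required, you may want to validate the label category configuration file before uploading it to Amazon S3. The following small Python check is illustrative only and is not part of any Ground Truth API:

```
import json

# Hypothetical local copy of the label category configuration file.
with open("label-category-config.json") as f:
    config = json.load(f)

declared = {entry["type"] for entry in config.get("annotationType", [])}

# A 3D-2D object tracking job requires both annotation types.
missing = {"BoundingBox", "Cuboid"} - declared
if missing:
    raise ValueError(f"Label category config is missing annotation types: {missing}")
```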

# View the worker task interface for a 3D-2D object tracking labeling job
<a name="sms-point-cloud-3d-2d-object-tracking-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D-2D object tracking annotation tasks. When you create the labeling job using the API, you provide the Amazon Resource Name (ARN) of a pre-built Ground Truth UI in the `HumanTaskUiArn` parameter. You can preview and interact with the worker UI when you create a labeling job through the API. The annotation tools are part of the worker task interface and are not available in the preview interface. The following image shows the worker task interface used for the 3D-2D point cloud object tracking annotation task.

![\[The worker task interface used for the 3D-2D point cloud object tracking annotation task.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms-sensor-fusion.png)


Interpolation is enabled by default. After a worker adds a single cuboid, that cuboid is replicated in all frames of the sequence with the same ID. If the worker adjusts the cuboid in another frame, Ground Truth interpolates the movement of that object and adjusts all cuboids between the manually adjusted frames. Additionally, in the camera view section, a cuboid can be shown with a projection (use the B button to toggle labels in the camera view) that gives the worker a reference from the camera images. The accuracy of the cuboid-to-image projection depends on the accuracy of the calibration captured in the extrinsic and intrinsic data.

If you provide camera data for sensor fusion, images are matched with the scenes in the point cloud frames. The camera data should be time-synchronized with the point cloud data to ensure accurate alignment between the point cloud and the imagery in each frame of the sequence, as shown in the following image.

![\[The manifest file, the worker portal with point cloud data and the camera data.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/3d_2d_link_ss.png)


The manifest file holds the extrinsic and intrinsic calibration data and the pose that allow the cuboid projection on the camera image to be shown using the **P button**.
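The projection itself follows the standard pinhole camera model: a point is transformed from the LiDAR frame into the camera frame using the extrinsic rotation and translation, then mapped to pixel coordinates using the intrinsic matrix. The following sketch uses hypothetical calibration values to illustrate the idea; it is not Ground Truth's internal implementation.

```
import numpy as np

# Hypothetical calibration values, for illustration only.
K = np.array([[720.0,   0.0, 640.0],   # intrinsic matrix: focal lengths
              [  0.0, 720.0, 360.0],   # and principal point, in pixels
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # extrinsic rotation (LiDAR -> camera)
t = np.array([0.0, -1.2, 0.0])         # extrinsic translation, in meters

def project(point_lidar):
    """Project a 3D LiDAR point to 2D pixel coordinates."""
    p_cam = R @ point_lidar + t        # transform into the camera frame
    u_w, v_w, w = K @ p_cam            # apply the intrinsics
    return u_w / w, v_w / w            # perspective divide by depth

print(project(np.array([2.0, 0.5, 10.0])))  # a point 10 m in front of the camera
```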

Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom in to them.
+ Use a mouse scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys or the Q, E, A, and D keys to move up, down, left, and right. Use the W and S keys to zoom in and out.

Once a worker places a cuboid in the 3D scene, a side view appears with three projected views: top, side, and front. These side views show points in and around the placed cuboid and help workers refine the cuboid boundaries in that area. Workers can zoom in and out of each side view using their mouse.

To link a cuboid with a bounding box, the worker first selects the cuboid and then draws the corresponding bounding box on any of the camera views. This links the cuboid and the bounding box with a common name and a unique ID.

The worker can also draw the bounding box first, select it, and then draw the corresponding cuboid to link them.

Additional view options and features are available. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-object-tracking.html) for a comprehensive overview of the Worker UI. 

## Worker tools
<a name="sms-point-cloud-object-tracking-worker-tools"></a>

Workers can navigate through the 3D point cloud by zooming in and out, and moving in all directions around the cloud using the mouse and keyboard shortcuts. If workers click on a point in the point cloud, the UI automatically zooms into that area. Workers can use various tools to draw 3D cuboids around objects. For more information, see **Assistive Labeling Tools** in the following discussion. 

After workers have placed a 3D cuboid in the point cloud, they can adjust these cuboids to fit tightly around cars using a variety of views: directly in the 3D point cloud, in a side-view featuring three zoomed-in perspectives of the point cloud around the box, and if you include images for sensor fusion, directly in the 2D image. 

Additional view options enable workers to easily hide or view label text, a ground mesh, and additional point attributes. Workers can also choose between perspective and orthogonal projections. 

**Assistive Labeling Tools**  
Ground Truth helps workers annotate 3D point clouds faster and more accurately using UX, machine learning, and computer vision powered assistive labeling tools for 3D point cloud object tracking tasks. The following assistive labeling tools are available for this task type:
+ **Label autofill** – When a worker adds a cuboid to a frame, a cuboid with the same dimensions, orientation, and x, y, z position is automatically added to all frames in the sequence. 
+ **Label interpolation** – After a worker has labeled a single object in two frames, Ground Truth uses those annotations to interpolate the movement of that object between all the frames. Label interpolation can be turned on and off; it is on by default. For example, if a worker working with 5 frames adds a cuboid in frame 2, it is copied to all 5 frames. If the worker then makes adjustments in frame 4, frames 2 and 4 act as two points through which a line is fit. The cuboid is then interpolated in frames 1, 3, and 5 (see the sketch after this list).
+ **Bulk label and attribute management** – Workers can add, delete, and rename annotations, label category attributes, and frame attributes in bulk.
  + Workers can manually delete annotations for a given object before and after a frame, or in all frames. For example, a worker can delete all labels for an object after frame 10 if that object is no longer located in the scene after that frame. 
  + If a worker accidentally bulk deletes all annotations for an object, they can add them back. For example, if a worker deletes all annotations for an object before frame 100, they can bulk add them to those frames. 
  + Workers can rename a label in one frame and all 3D cuboids assigned that label are updated with the new name across all frames. 
  + Workers can use bulk editing to add or edit label category attributes and frame attributes in multiple frames.
+ **Snapping** – Workers can add a cuboid around an object and use a keyboard shortcut or menu option to have Ground Truth's autofit tool snap the cuboid tightly around the object's boundaries. 
+ **Fit to ground** – After a worker adds a cuboid to the 3D scene, the worker can automatically snap the cuboid to the ground. For example, the worker can use this feature to snap a cuboid to the road or sidewalk in the scene. 
+ **Multi-view labeling** – After a worker adds a 3D cuboid to the 3D scene, a side panel displays front and two side perspectives to help the worker adjust the cuboid tightly around the object. Workers can adjust the annotation in either the 3D point cloud or the side panel, and the adjustments appear in the other views in real time. 
+ **Sensor fusion** – If you provide data for sensor fusion, workers can adjust annotations in the 3D scenes and in 2D images, and the annotations are projected into the other view in real time. To learn more about the data for sensor fusion, see [Understand Coordinate Systems and Sensor Fusion](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-sensor-fusion-details.html#sms-point-cloud-sensor-fusion).
+ **Auto-merge cuboids** – Workers can automatically merge two cuboids across all frames if they determine that cuboids with different labels actually represent a single object. 
+ **View options** – Enables workers to easily hide or view label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 
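To make the **Label interpolation** example above concrete, the following sketch fits a line through a cuboid's center at two manually adjusted keyframes (frames 2 and 4) and evaluates it at the remaining frames. Ground Truth's actual interpolation may be more sophisticated; this is an illustration of the idea only.

```
import numpy as np

# Worker-adjusted keyframes: cuboid center (x, y, z) in frames 2 and 4.
f0, c0 = 2, np.array([10.0, 4.0, 0.5])
f1, c1 = 4, np.array([14.0, 4.5, 0.5])

def center_at(frame):
    """Evaluate the line fit through the two keyframes at any frame."""
    alpha = (frame - f0) / (f1 - f0)
    return (1 - alpha) * c0 + alpha * c1

for frame in range(1, 6):
    print(frame, center_at(frame))  # frames 1, 3, and 5 fall on the fitted line
```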

# Output data for a 3D-2D object tracking labeling job
<a name="sms-point-cloud-3d-2d-object-tracking-output-data"></a>

When you create a 3D-2D object tracking labeling job, tasks are sent to workers. When these workers complete their tasks, their annotations are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 
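For example, after the job finishes you can locate the output programmatically. The following boto3 sketch, which reuses the placeholder job name from the earlier example, checks the job status and prints the S3 location of the output dataset:

```
import boto3

sagemaker = boto3.client("sagemaker")

response = sagemaker.describe_labeling_job(
    LabelingJobName="my-3d-2d-object-tracking-job"  # placeholder
)

if response["LabelingJobStatus"] == "Completed":
    # S3 URI of the output manifest with the consolidated annotations.
    print(response["LabelingJobOutput"]["OutputDatasetS3Uri"])
else:
    print("Job status:", response["LabelingJobStatus"])
```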

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D-2D point cloud object tracking output data format, see [3D-2D point cloud object tracking output](sms-data-output.md#sms-output-3d-2d-point-cloud-object-tracking). 