

# Use Ground Truth to Label 3D Point Clouds
<a name="sms-point-cloud"></a>

Create a 3D point cloud labeling job to have workers label objects in 3D point clouds generated from 3D sensors, such as Light Detection and Ranging (LiDAR) sensors and depth cameras, or generated through 3D reconstruction by stitching images captured by an agent like a drone. 

## 3D Point Clouds
<a name="sms-point-cloud-define"></a>

Point clouds are made up of three-dimensional (3D) visual data that consists of points. Each point is described using three coordinates, typically `x`, `y`, and `z`. To add color or variations in point intensity to the point cloud, points may be described with additional attributes, such as `i` for intensity or values for the red (`r`), green (`g`), and blue (`b`) 8-bit color channels. When you create a Ground Truth 3D point cloud labeling job, you can provide point cloud and, optionally, sensor fusion data. 
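For illustration, the following sketch parses a point cloud in a simple ASCII format where each line holds one point as `x y z i`. This column layout is an assumption for the example; see [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md) for the formats Ground Truth actually accepts.

```python
# Sketch: parse an ASCII point cloud where each line is "x y z i"
# (the column layout is an assumption, not a Ground Truth requirement).

def parse_point_cloud(text):
    """Return a list of (x, y, z, i) tuples from ASCII point cloud text."""
    points = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        x, y, z, i = (float(v) for v in line.split()[:4])
        points.append((x, y, z, i))
    return points

sample = """\
1.0 2.0 0.5 0.9
-3.2 0.1 0.4 0.6
"""
print(parse_point_cloud(sample))  # [(1.0, 2.0, 0.5, 0.9), (-3.2, 0.1, 0.4, 0.6)]
```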

The following image shows a single, 3D point cloud scene rendered by Ground Truth and displayed in the semantic segmentation worker UI.

![\[Gif showing how workers can use the 3D point cloud and 2D image together to paint objects.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_paint_sf.gif)


### LiDAR
<a name="sms-point-cloud-data-types-lidar"></a>

A Light Detection and Ranging (LiDAR) sensor is a common type of sensor used to collect measurements that are used to generate point cloud data. LiDAR is a remote sensing method that uses light in the form of a pulsed laser to measure the distances of objects from the sensor. You can provide 3D point cloud data generated from a LiDAR sensor for a Ground Truth 3D point cloud labeling job using the raw data formats described in [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md).

### Sensor Fusion
<a name="sms-point-cloud-data-types-sensor"></a>

Ground Truth 3D point cloud labeling jobs include a sensor fusion feature that supports video camera sensor fusion for all task types. Some sensor setups include multiple LiDAR devices and video cameras that capture images and associate them with a LiDAR frame. To help annotators complete your tasks visually and with high confidence, you can use the Ground Truth sensor fusion feature to project annotations (labels) from a 3D point cloud into 2D camera images, and vice versa, using the 3D scanner (such as LiDAR) extrinsic matrix and the camera extrinsic and intrinsic matrices. To learn more, see [Sensor Fusion](sms-point-cloud-sensor-fusion-details.md#sms-point-cloud-sensor-fusion).
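The projection that sensor fusion performs can be sketched as follows: a world point is mapped into the camera frame with the camera extrinsic matrix, then onto the image plane with the intrinsic matrix. This is a minimal illustration with placeholder matrices, not real calibration data; Ground Truth computes these projections for you from the matrices you supply.

```python
import numpy as np

# Sketch: world point -> camera frame (extrinsic) -> pixel (intrinsic).
# The matrices below are illustrative placeholders, not real calibration data.

def project_point(point_world, extrinsic, intrinsic):
    """Project a 3D world point to 2D pixel coordinates (u, v)."""
    p = np.append(point_world, 1.0)   # homogeneous coordinates
    p_cam = extrinsic @ p             # world -> camera frame (3x4 @ 4-vector)
    u, v, w = intrinsic @ p_cam       # camera frame -> image plane
    return u / w, v / w               # perspective divide

extrinsic = np.hstack([np.eye(3), np.zeros((3, 1))])  # identity camera pose
intrinsic = np.array([[1000.0, 0, 640],               # focal lengths and
                      [0, 1000.0, 480],               # principal point
                      [0, 0, 1]])
print(project_point(np.array([1.0, 0.5, 5.0]), extrinsic, intrinsic))
```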

## Label 3D Point Clouds
<a name="sms-point-cloud-annotation-define"></a>

Ground Truth provides a user interface (UI) and tools that workers use to label or *annotate* 3D point clouds. When you use the object detection or semantic segmentation task types, workers can annotate a single point cloud frame. When you use object tracking, workers annotate a sequence of frames. You can use object tracking to track object movement across all frames in a sequence. 

The following demonstrates how a worker would use the Ground Truth worker portal and tools to annotate a 3D point cloud for an object detection task. For similar visual examples of other task types, see [3D Point Cloud Task types](sms-point-cloud-task-types.md).

![\[Gif showing how a worker can annotate a 3D point cloud in the Ground Truth worker portal.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/ot_basic_tools.gif)


### Assistive Labeling Tools for Point Cloud Annotation
<a name="sms-point-cloud-assistive-labeling-tools"></a>

Ground Truth offers assistive labeling tools to help workers complete your point cloud annotation tasks faster and with more accuracy. For details about assistive labeling tools that are included in the worker UI for each task type, [select a task type](sms-point-cloud-task-types.md) and refer to the **View the Worker Task Interface** section of that page.

## Next Steps
<a name="sms-point-cloud-next-steps-getting-started"></a>

You can create six types of tasks when you use Ground Truth 3D point cloud labeling jobs. Use the topics in [3D Point Cloud Task types](sms-point-cloud-task-types.md) to learn more about these *task types* and to learn how to create a labeling job using the task type of your choice. 

The 3D point cloud labeling job is different from other Ground Truth labeling modalities. Before creating a labeling job, we recommend that you read [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). Additionally, review input data quotas in [3D Point Cloud and Video Frame Labeling Job Quotas](input-data-limits.md#sms-input-data-quotas-other).

**Important**  
If you use a notebook instance created before June 5th, 2020 to run this notebook, you must stop and restart that notebook instance for the notebook to work. 

**Topics**
+ [3D Point Clouds](#sms-point-cloud-define)
+ [Label 3D Point Clouds](#sms-point-cloud-annotation-define)
+ [Next Steps](#sms-point-cloud-next-steps-getting-started)
+ [3D Point Cloud Task types](sms-point-cloud-task-types.md)
+ [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md)
+ [Worker instructions](sms-point-cloud-worker-instructions.md)

# 3D Point Cloud Task types
<a name="sms-point-cloud-task-types"></a>

You can use the Ground Truth 3D point cloud labeling modality for a variety of use cases. The following list briefly describes each 3D point cloud task type. For additional details and instructions on how to create a labeling job using a specific task type, select the task type name to see its task type page. 
+ [3D point cloud object detection](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-detection.html) – Use this task type when you want workers to locate and classify objects in a 3D point cloud by adding and fitting 3D cuboids around objects. 
+ [3D point cloud object tracking](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-tracking.html) – Use this task type when you want workers to add and fit 3D cuboids around objects to track their movement across a sequence of 3D point cloud frames. For example, you can use this task type to ask workers to track the movement of vehicles across multiple point cloud frames.
+ [3D point cloud semantic segmentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-semantic-segmentation.html) – Use this task type when you want workers to create a point-level semantic segmentation mask by painting objects in a 3D point cloud using different colors where each color is assigned to one of the classes you specify. 
+ 3D point cloud adjustment task types – Each of the task types above has an associated *adjustment* task type that you can use to audit and adjust annotations generated from a 3D point cloud labeling job. Refer to the page of the associated task type to learn how to create an adjustment labeling job for that task. 

# Classify objects in a 3D point cloud with object detection
<a name="sms-point-cloud-object-detection"></a>

Use this task type when you want workers to classify objects in a 3D point cloud by drawing 3D cuboids around objects. For example, you can use this task type to ask workers to identify different types of objects in a point cloud, such as cars, bikes, and pedestrians. The following page gives important information about the labeling job, as well as steps to create one.

For this task type, the *data object* that workers label is a single point cloud frame. Ground Truth renders a 3D point cloud using point cloud data you provide. You can also provide camera data to give workers more visual information about scenes in the frame, and to help workers draw 3D cuboids around objects. 

Ground Truth provides workers with tools to annotate objects with 9 degrees of freedom (x, y, z, rx, ry, rz, l, w, h) in three dimensions, in both the 3D scene and the projected side views (top, side, and back). If you provide sensor fusion information (like camera data), when a worker adds a cuboid to identify an object in the 3D point cloud, the cuboid also shows up and can be modified in the 2D images. After a cuboid has been added, all edits made to that cuboid in the 2D or 3D scene are projected into the other view.
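As a mental model, the 9 degrees of freedom can be represented as a position, a rotation, and a set of dimensions. The field names below are illustrative only and do not necessarily match the Ground Truth output schema.

```python
from dataclasses import dataclass

# Illustrative model of a cuboid with 9 degrees of freedom. Field names are
# for illustration and do not necessarily match the Ground Truth output schema.

@dataclass
class Cuboid:
    x: float       # center position
    y: float
    z: float
    rx: float      # roll, in radians
    ry: float      # pitch, in radians
    rz: float      # yaw, in radians
    length: float  # dimensions
    width: float
    height: float

    def volume(self) -> float:
        return self.length * self.width * self.height

car = Cuboid(x=10.0, y=-2.0, z=0.9, rx=0.0, ry=0.0, rz=1.57,
             length=4.5, width=1.8, height=1.5)
print(round(car.volume(), 2))  # 12.15
```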

You can create a job to adjust annotations created in a 3D point cloud object detection labeling job using the 3D point cloud object detection adjustment task type. 

If you are a new user of the Ground Truth 3D point cloud labeling modality, we recommend you review [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). This labeling modality is different from other Ground Truth task types, and this page provides an overview of important details you should be aware of when creating a 3D point cloud labeling job.

**Topics**
+ [View the Worker Task Interface](#sms-point-cloud-object-detection-worker-ui)
+ [Create a 3D Point Cloud Object Detection Labeling Job](#sms-point-cloud-object-detection-create-labeling-job)
+ [Create a 3D Point Cloud Object Detection Adjustment or Verification Labeling Job](#sms-point-cloud-object-detection-adjustment-verification)
+ [Output Data Format](#sms-point-cloud-object-detection-output-data)

## View the Worker Task Interface
<a name="sms-point-cloud-object-detection-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D point cloud object detection annotation tasks. When you create the labeling job, you provide the Amazon Resource Name (ARN) for a pre-built Ground Truth worker UI in the `HumanTaskUiArn` parameter. When you create a labeling job using this task type in the console, this worker UI is automatically used. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, we recommend that you create a labeling job using the console to ensure that your label attributes, point cloud frames, and, if applicable, images appear as expected. 

The following is a GIF of the 3D point cloud object detection worker task interface. If you provide camera data for sensor fusion in the world coordinate system, images are matched up with scenes in the point cloud frame. These images appear in the worker portal as shown in the following GIF. 

![\[Gif showing how a worker can annotate a 3D point cloud in the Ground Truth worker portal.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/ot_basic_tools.gif)


Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom into them.
+ Use a mouse scroll wheel or trackpad to zoom in and out of the point cloud.
+ Use both the keyboard arrow keys and the Q, E, A, and D keys to move up, down, left, and right. Use the keyboard keys W and S to zoom in and out. 

After a worker places a cuboid in the 3D scene, a side panel appears with the three projected side views: top, side, and back. These side views show points in and around the placed cuboid and help workers refine cuboid boundaries in that area. Workers can zoom in and out of each of these side views using their mouse. 

The following video demonstrates movements around the 3D point cloud and in the side-view. 

![\[Gif showing movements around the 3D point cloud and the side-view.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/navigate_od_worker_ui.gif)


Additional view options and features are available in the **View** menu in the worker UI. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-object-detection) for a comprehensive overview of the Worker UI. 

**Assistive Labeling Tools**  
Ground Truth helps workers annotate 3D point clouds faster and more accurately using machine learning and computer vision powered assistive labeling tools for 3D point cloud object detection tasks. The following assistive labeling tools are available for this task type:
+ **Snapping** – Workers can add a cuboid around an object and use a keyboard shortcut or menu option to have Ground Truth's autofit tool snap the cuboid tightly around the object. 
+ **Set to ground** – After a worker adds a cuboid to the 3D scene, the worker can automatically snap the cuboid to the ground. For example, the worker can use this feature to snap a cuboid to the road or sidewalk in the scene. 
+ **Multi-view labeling** – After a worker adds a 3D cuboid to the 3D scene, a side panel displays front, side, and top perspectives to help the worker adjust the cuboid tightly around the object. In all of these views, the cuboid includes an arrow that indicates the orientation, or heading of the object. When the worker adjusts the cuboid, the adjustment will appear in real time on all of the views (that is, 3D, top, side, and front). 
+ **Sensor fusion** – If you provide data for sensor fusion, workers can adjust annotations in the 3D scenes and in 2D images, and the annotations will be projected into the other view in real time. Additionally, workers will have the option to view the direction the camera is facing and the camera frustum.
+ **View options** – Enables workers to easily hide or view cuboids, label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 

## Create a 3D Point Cloud Object Detection Labeling Job
<a name="sms-point-cloud-object-detection-create-labeling-job"></a>

You can create a 3D point cloud labeling job using the SageMaker AI console or API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A single-frame input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, you may also want to review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
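For reference, an input manifest is a JSON Lines file with one data object per line. The following sketch writes a minimal single-frame manifest; the `source-ref` key follows the point cloud frame manifest documentation, and the S3 URIs are placeholders.

```python
import json

# Sketch: write a single-frame input manifest (JSON Lines, one data object
# per line). The "source-ref" key follows the point cloud frame manifest
# documentation; the S3 URIs below are placeholders.

frames = [
    "s3://amzn-s3-demo-bucket/frames/frame-0001.txt",
    "s3://amzn-s3-demo-bucket/frames/frame-0002.txt",
]

with open("manifest.jsonl", "w") as f:
    for uri in frames:
        f.write(json.dumps({"source-ref": uri}) + "\n")
```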

Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

Use one of the following sections to learn how to create a labeling job using the console or an API. 

### Create a Labeling Job (Console)
<a name="sms-point-cloud-object-detection-create-labeling-job-console"></a>

You can follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a 3D point cloud object detection labeling job in the SageMaker AI console. While you are creating your labeling job, be aware of the following: 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ Optionally, you can provide label category and frame attributes. Workers can assign one or more of these attributes to annotations to provide more information about that object. For example, you might want to use the attribute *occluded* to have workers identify when an object is partially obstructed.
+ Automated data labeling and annotation consolidation are not supported for 3D point cloud labeling tasks.
+ 3D point cloud object detection labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs when you select your work team (up to 7 days, or 604800 seconds). 

### Create a Labeling Job (API)
<a name="sms-point-cloud-object-detection-create-labeling-job-api"></a>

This section covers details you need to know when you create a labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is defined for all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudObjectDetection`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ You specify your labels, label category and frame attributes, and worker instructions in a label category configuration file. To learn how to create this file, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md). 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudObjectDetection`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:ACS-3DPointCloudObjectDetection`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` must be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`.
+ 3D point cloud object detection labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 
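Putting the bullets above together, the following sketch assembles a `CreateLabelingJob` request for this task type. The bucket names, role, work team, and job name are placeholders; the UI and Lambda ARNs follow the us-east-1 patterns described above.

```python
# Sketch: assemble a CreateLabelingJob request for 3D point cloud object
# detection. Bucket names, role ARN, work team ARN, and job name are
# placeholders; replace them with your own values.

region = "us-east-1"
account_ui = "394669845002"      # Ground Truth worker UI account
account_lambda = "432418664414"  # Ground Truth Lambda account

request = {
    "LabelingJobName": "pc-object-detection-demo",
    "LabelAttributeName": "pc-od-labels",
    "InputConfig": {"DataSource": {"S3DataSource": {
        "ManifestS3Uri": "s3://amzn-s3-demo-bucket/manifest.jsonl"}}},
    "OutputConfig": {"S3OutputPath": "s3://amzn-s3-demo-bucket/output/"},
    "RoleArn": "arn:aws:iam::111122223333:role/GroundTruthRole",
    "LabelCategoryConfigS3Uri": "s3://amzn-s3-demo-bucket/label-category.json",
    "HumanTaskConfig": {
        "WorkteamArn": f"arn:aws:sagemaker:{region}:111122223333:workteam/private-crowd/my-team",
        # Pre-built worker UI; do not set UiTemplateS3Uri for this task type.
        "HumanTaskUiArn": f"arn:aws:sagemaker:{region}:{account_ui}:human-task-ui/PointCloudObjectDetection",
        "PreHumanTaskLambdaArn": f"arn:aws:lambda:{region}:{account_lambda}:function:PRE-3DPointCloudObjectDetection",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn":
                f"arn:aws:lambda:{region}:{account_lambda}:function:ACS-3DPointCloudObjectDetection"},
        "NumberOfHumanWorkersPerDataObject": 1,  # must be 1 for this task type
        "TaskTimeLimitInSeconds": 604800,        # up to 7 days
        "TaskTitle": "3D point cloud object detection",
        "TaskDescription": "Draw 3D cuboids around objects in the point cloud",
    },
}

# To submit the request:
# import boto3
# boto3.client("sagemaker", region_name=region).create_labeling_job(**request)
print(request["HumanTaskConfig"]["HumanTaskUiArn"])
```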

## Create a 3D Point Cloud Object Detection Adjustment or Verification Labeling Job
<a name="sms-point-cloud-object-detection-adjustment-verification"></a>

You can create an adjustment or verification labeling job using the Ground Truth console or `CreateLabelingJob` API. To learn more about adjustment and verification labeling jobs, and to learn how create one, see [Label verification and adjustment](sms-verification-data.md).

When you create an adjustment labeling job, your input data to the labeling job can include labels, and yaw, pitch, and roll measurements from a previous labeling job or external source. In the adjustment job, pitch and roll are visualized in the worker UI but cannot be modified. Yaw is adjustable. 

Ground Truth uses Tait-Bryan angles with the following intrinsic rotations to visualize yaw, pitch, and roll in the worker UI. First, rotation is applied to the vehicle about the z-axis (yaw). Next, the rotated vehicle is rotated about the intrinsic y'-axis (pitch). Finally, the vehicle is rotated about the intrinsic x''-axis (roll). 
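This intrinsic z-y'-x'' rotation order is equivalent to composing fixed-axis rotation matrices as `Rz(yaw) @ Ry(pitch) @ Rx(roll)`. The following sketch illustrates the composition:

```python
import numpy as np

# Sketch of the intrinsic z-y'-x'' (yaw, pitch, roll) order described above:
# intrinsic rotations in that order compose as Rz(yaw) @ Ry(pitch) @ Rx(roll).

def rotation_matrix(yaw, pitch, roll):
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])  # yaw about z
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # pitch about y'
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])  # roll about x''
    return Rz @ Ry @ Rx

# 90 degrees of yaw rotates the x-axis onto the y-axis.
print(np.round(rotation_matrix(np.pi / 2, 0, 0) @ np.array([1, 0, 0]), 6))
```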

## Output Data Format
<a name="sms-point-cloud-object-detection-output-data"></a>

When you create a 3D point cloud object detection labeling job, tasks are sent to workers. When these workers complete their tasks, labels are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D point cloud object detection output data format, see [3D point cloud object detection output](sms-data-output.md#sms-output-point-cloud-object-detection). 

# Understand the 3D point cloud object tracking task type
<a name="sms-point-cloud-object-tracking"></a>

Use this task type when you want workers to add and fit 3D cuboids around objects to track their movement across 3D point cloud frames. For example, you can use this task type to ask workers to track the movement of vehicles across multiple point cloud frames. 

For this task type, the data object that workers label is a sequence of point cloud frames. A *sequence* is defined as a temporal series of point cloud frames. Ground Truth renders a series of 3D point cloud visualizations using a sequence you provide and workers can switch between these 3D point cloud frames in the worker task interface. 

Ground Truth provides workers with tools to annotate objects with 9 degrees of freedom (x,y,z,rx,ry,rz,l,w,h) in three dimensions in both 3D scene and projected side views (top, side, and back). When a worker draws a cuboid around an object, that cuboid is given a unique ID, for example `Car:1` for one car in the sequence and `Car:2` for another. Workers use that ID to label the same object in multiple frames.

You can also provide camera data to give workers more visual information about scenes in the frame, and to help workers draw 3D cuboids around objects. When a worker adds a 3D cuboid to identify an object in either the 2D image or the 3D point cloud, the cuboid shows up in the other view. 

You can adjust annotations created in a 3D point cloud object tracking labeling job using the 3D point cloud object tracking adjustment task type. 

If you are a new user of the Ground Truth 3D point cloud labeling modality, we recommend you review [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). This labeling modality is different from other Ground Truth task types, and this page provides an overview of important details you should be aware of when creating a 3D point cloud labeling job.

The following topics explain how to create a 3D point cloud object tracking job, show what the worker task interface looks like (what workers see when they work on this task), and provide an overview of the output data you get when workers complete their tasks. The final topic provides useful information for creating object tracking adjustment or verification labeling jobs.

**Topics**
+ [Create a 3D point cloud object tracking labeling job](sms-point-cloud-object-tracking-create-labeling-job.md)
+ [View the worker task interface for a 3D point cloud object tracking task](sms-point-cloud-object-tracking-worker-ui.md)
+ [Output data for a 3D point cloud object tracking labeling job](sms-point-cloud-object-tracking-output-data.md)
+ [Information for creating a 3D point cloud object tracking adjustment or verification labeling job](sms-point-cloud-object-tracking-adjustment-verification.md)

# Create a 3D point cloud object tracking labeling job
<a name="sms-point-cloud-object-tracking-create-labeling-job"></a>

You can create a 3D point cloud labeling job using the SageMaker AI console or API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A sequence input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, we recommend that you review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
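For reference, the following sketch shows the shape of a sequence file and the manifest line that points to it. The field names follow the sequence input manifest documentation, but verify them against [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md); all values here are placeholders.

```python
import json

# Sketch: a sequence file describing a temporal series of point cloud frames,
# referenced from the input manifest by a "source-ref" line. Field names
# follow the sequence manifest documentation; verify them against that page.

sequence = {
    "seq-no": 1,
    "prefix": "s3://amzn-s3-demo-bucket/sequences/seq1/",
    "number-of-frames": 2,
    "frames": [
        {"frame-no": 0, "unix-timestamp": 1566861644.759115, "frame": "frame-0000.txt"},
        {"frame-no": 1, "unix-timestamp": 1566861644.859115, "frame": "frame-0001.txt"},
    ],
}

with open("seq1.json", "w") as f:
    json.dump(sequence, f)

# Each line of the input manifest then points at one sequence file:
manifest_line = json.dumps({"source-ref": "s3://amzn-s3-demo-bucket/sequences/seq1.json"})
print(manifest_line)
```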

Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

To learn how to create a labeling job using the console or an API, see the following sections. 

## Create a labeling job (console)
<a name="sms-point-cloud-object-tracking-create-labeling-job-console"></a>

You can follow the instructions in [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) to learn how to create a 3D point cloud object tracking labeling job in the SageMaker AI console. While you are creating your labeling job, be aware of the following: 
+ Your input manifest file must be a sequence manifest file. For more information, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). 
+ Optionally, you can provide label category attributes. Workers can assign one or more of these attributes to annotations to provide more information about that object. For example, you might want to use the attribute *occluded* to have workers identify when an object is partially obstructed.
+ Automated data labeling and annotation consolidation are not supported for 3D point cloud labeling tasks. 
+ 3D point cloud object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs when you select your work team (up to 7 days, or 604800 seconds). 

## Create a labeling job (API)
<a name="sms-point-cloud-object-tracking-create-labeling-job-api"></a>

This section covers details you need to know when you create a labeling job using the SageMaker API operation `CreateLabelingJob`. This operation is defined for all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudObjectTracking`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ot-labels-ref`. 
+ Your input manifest file must be a point cloud frame sequence manifest file. For more information, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). 
+ You specify your labels, label category and frame attributes, and worker instructions in a label category configuration file. To learn how to create this file, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md). 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `PRE-3DPointCloudObjectTracking`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `ACS-3DPointCloudObjectTracking`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` must be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`. 
+ 3D point cloud object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 
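As an illustration of the label category configuration file mentioned above, the following sketch writes one with two labels, a label category attribute, and worker instructions. Treat the attribute fields as assumptions and verify them against [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md).

```python
import json

# Sketch: a label category configuration file with labels, one label category
# attribute, and worker instructions. Field names follow the label category
# configuration reference; treat the attribute fields here as illustrative.

label_category_config = {
    "document-version": "2020-03-01",
    "labels": [{"label": "Car"}, {"label": "Pedestrian"}],
    "categoryGlobalAttributes": [
        {"name": "occluded", "type": "string", "enum": ["partial", "full", "none"]}
    ],
    "instructions": {
        "shortInstruction": "Draw a tight cuboid around each object.",
        "fullInstruction": "Track each object across all frames using its ID.",
    },
}

with open("label-category.json", "w") as f:
    json.dump(label_category_config, f)

print([l["label"] for l in label_category_config["labels"]])  # ['Car', 'Pedestrian']
```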

# View the worker task interface for a 3D point cloud object tracking task
<a name="sms-point-cloud-object-tracking-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D point cloud object tracking annotation tasks. When you create the labeling job, you provide the Amazon Resource Name (ARN) for a pre-built Ground Truth UI in the `HumanTaskUiArn` parameter. When you create a labeling job using this task type in the console, this UI is automatically used. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, we recommend that you create a labeling job using the console to ensure that your label attributes, point cloud frames, and, if applicable, images appear as expected. 

The following is a GIF of the 3D point cloud object tracking worker task interface that demonstrates how the worker can navigate the point cloud frames in the sequence. The annotation tools are part of the worker task interface; they are not available in the preview interface. 

![\[Gif showing how the worker can navigate the point cloud frames in the sequence.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/nav_frames.gif)


Once workers add a single cuboid, that cuboid is replicated in all frames of the sequence with the same ID. When workers adjust the cuboid in another frame, Ground Truth interpolates the movement of that object and adjusts all cuboids between the manually adjusted frames. The following GIF demonstrates this interpolation feature. In the navigation bar on the bottom-left, red areas indicate manually adjusted frames. 

![\[Gif showing how the location of a cuboid is inferred in in-between frames.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/label-interpolation.gif)
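Conceptually, the interpolation between two manually adjusted frames can be pictured as a blend of the cuboid's pose. The following sketch is illustrative only; Ground Truth's internal interpolation algorithm is not published here, and the pose keys (`center`, `yaw`) are hypothetical names.

```python
import numpy as np

def interpolate_cuboid(pose_a, pose_b, frame, frame_a, frame_b):
    """Linearly blend a cuboid's center and yaw between two keyframes.

    Illustrative only; pose dicts use hypothetical keys 'center' and 'yaw'.
    """
    t = (frame - frame_a) / (frame_b - frame_a)  # 0.0 at frame_a, 1.0 at frame_b
    center = ((1 - t) * np.asarray(pose_a["center"], float)
              + t * np.asarray(pose_b["center"], float))
    # Blend yaw along the shortest arc so e.g. 359 deg -> 1 deg does not spin backward.
    dyaw = (pose_b["yaw"] - pose_a["yaw"] + np.pi) % (2 * np.pi) - np.pi
    return {"center": center, "yaw": pose_a["yaw"] + t * dyaw}

# A cuboid adjusted manually in frames 0 and 10; frame 5 is inferred halfway.
mid = interpolate_cuboid({"center": (0, 0, 0), "yaw": 0.0},
                         {"center": (10, 2, 0), "yaw": 0.2},
                         frame=5, frame_a=0, frame_b=10)
```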


If you provide camera data for sensor fusion, images are matched up with scenes in point cloud frames. These images appear in the worker portal as shown in the following GIF. 

Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom in to them.
+ Use a mouse scroll wheel or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys or the Q, E, A, and D keys to move up, down, left, and right. Use the W and S keys to zoom in and out. 

Once a worker places a cuboid in the 3D scene, a side view appears with three projected perspectives: top, side, and back. These side views show points in and around the placed cuboid and help workers refine cuboid boundaries in that area. Workers can zoom in and out of each of those side views using their mouse. 

The following video demonstrates movements around the 3D point cloud and in the side-view. 

![\[Gif showing movements around the 3D point cloud showing a street scene.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/nav_general_UI.gif)


Additional view options and features are available. See the [worker instruction page](https://docs.aws.amazon.com//sagemaker/latest/dg/sms-point-cloud-worker-instructions-object-tracking.html) for a comprehensive overview of the Worker UI. 

## Worker tools
<a name="sms-point-cloud-object-tracking-worker-tools"></a>

Workers can navigate through the 3D point cloud by zooming in and out, and moving in all directions around the cloud using the mouse and keyboard shortcuts. If workers click on a point in the point cloud, the UI automatically zooms into that area. Workers can use various tools to draw 3D cuboids around objects. For more information, see **Assistive Labeling Tools**. 

After workers have placed a 3D cuboid in the point cloud, they can adjust these cuboids to fit tightly around cars using a variety of views: directly in the 3D cuboid, in a side-view featuring three zoomed-in perspectives of the point cloud around the box, and if you include images for sensor fusion, directly in the 2D image. 

View options enable workers to easily hide or view label text, a ground mesh, and additional point attributes. Workers can also choose between perspective and orthogonal projections. 

**Assistive Labeling Tools**  
Ground Truth helps workers annotate 3D point clouds faster and more accurately with assistive labeling tools for 3D point cloud object tracking tasks, powered by UX design, machine learning, and computer vision. The following assistive labeling tools are available for this task type:
+ **Label autofill** – When a worker adds a cuboid to a frame, a cuboid with the same dimensions and orientation is automatically added to all frames in the sequence. 
+ **Label interpolation** – After a worker has labeled a single object in two frames, Ground Truth uses those annotations to interpolate the movement of that object between those two frames. Label interpolation can be turned on and off.
+ **Bulk label and attribute management** – Workers can add, delete, and rename annotations, label category attributes, and frame attributes in bulk. 
  + Workers can manually delete annotations for a given object before or after a frame. For example, a worker can delete all labels for an object after frame 10 if that object is no longer located in the scene after that frame. 
  + If a worker accidentally bulk deletes all annotations for an object, they can add them back. For example, if a worker deletes all annotations for an object before frame 100, they can bulk add them to those frames. 
  + Workers can rename a label in one frame and all 3D cuboids assigned that label are updated with the new name across all frames. 
  + Workers can use bulk editing to add or edit label category attributes and frame attributes in multiple frames.
+ **Snapping** – Workers can add a cuboid around an object and use a keyboard shortcut or menu option to have Ground Truth's autofit tool snap the cuboid tightly around the object's boundaries. 
+ **Fit to ground** – After a worker adds a cuboid to the 3D scene, the worker can automatically snap the cuboid to the ground. For example, the worker can use this feature to snap a cuboid to the road or sidewalk in the scene. 
+ **Multi-view labeling** – After a worker adds a 3D cuboid to the 3D scene, a side panel displays the front and two side perspectives to help the worker adjust the cuboid tightly around the object. Workers can annotate in either the 3D point cloud or the side panel, and the adjustments appear in the other views in real time. 
+ **Sensor fusion** – If you provide data for sensor fusion, workers can adjust annotations in the 3D scenes and in 2D images, and the annotations will be projected into the other view in real time. 
+ **Auto-merge cuboids** – Workers can automatically merge two cuboids across all frames if they determine that cuboids with different labels actually represent a single object. 
+ **View options** – Enables workers to easily hide or view label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 

# Output data for a 3D point cloud object tracking labeling job
<a name="sms-point-cloud-object-tracking-output-data"></a>

When you create a 3D point cloud object tracking labeling job, tasks are sent to workers. When these workers complete their tasks, their annotations are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D point cloud object tracking output data format, see [3D point cloud object tracking output](sms-data-output.md#sms-output-point-cloud-object-tracking). 

# Information for creating a 3D point cloud object tracking adjustment or verification labeling job
<a name="sms-point-cloud-object-tracking-adjustment-verification"></a>

You can create an adjustment and verification labeling job using the Ground Truth console or `CreateLabelingJob` API. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

When you create an adjustment labeling job, your input data can include labels and yaw, pitch, and roll measurements from a previous labeling job or an external source. In the adjustment job, pitch and roll are visualized in the worker UI but cannot be modified. Yaw is adjustable. 

Ground Truth uses Tait-Bryan angles with the following intrinsic rotations to visualize yaw, pitch, and roll in the worker UI. First, rotation is applied to the vehicle about the z-axis (yaw). Next, the rotated vehicle is rotated about the intrinsic y'-axis (pitch). Finally, the vehicle is rotated about the intrinsic x''-axis (roll). 
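As a worked example, the intrinsic z-y'-x'' order described above is equivalent to composing the three basic rotation matrices in the extrinsic order R = Rz(yaw) · Ry(pitch) · Rx(roll). A minimal sketch:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Intrinsic z-y'-x'' (Tait-Bryan) rotation: yaw about z, then pitch
    about the new y axis, then roll about the new x axis. This intrinsic
    sequence equals the matrix product Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# A 90-degree yaw turns the vehicle's forward (x) axis onto the y axis.
R = rotation_matrix(np.pi / 2, 0.0, 0.0)
```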

# Understand the 3D point cloud semantic segmentation task type
<a name="sms-point-cloud-semantic-segmentation"></a>

Semantic segmentation involves classifying individual points of a 3D point cloud into pre-specified categories. Use this task type when you want workers to create a point-level semantic segmentation mask for 3D point clouds. For example, if you specify the classes `car`, `pedestrian`, and `bike`, workers select one class at a time and paint all of the points that the class applies to in the same color in the point cloud. 

For this task type, the data object that workers label is a single point cloud frame. Ground Truth generates a 3D point cloud visualization using point cloud data you provide. You can also provide camera data to give workers more visual information about scenes in the frame, and to help workers paint objects. When a worker paints an object in either the 2D image or the 3D point cloud, the paint shows up in the other view. 

You can also adjust or verify annotations created in a 3D point cloud semantic segmentation labeling job using the 3D point cloud semantic segmentation adjustment or verification task type. To learn more about adjustment and verification labeling jobs, and to learn how to create one, see [Label verification and adjustment](sms-verification-data.md).

If you are a new user of the Ground Truth 3D point cloud labeling modality, we recommend you review [3D point cloud labeling jobs overview](sms-point-cloud-general-information.md). This labeling modality is different from other Ground Truth task types, and this topic provides an overview of important details you should be aware of when creating a 3D point cloud labeling job.

The following topics explain how to create a 3D point cloud semantic segmentation job, show what the worker task interface looks like (what workers see when they work on this task), and provide an overview of the output data you get when workers complete their tasks.

**Topics**
+ [Create a 3D point cloud semantic segmentation labeling job](sms-point-cloud-semantic-segmentation-create-labeling-job.md)
+ [View the worker task interface for a 3D point cloud semantic segmentation job](sms-point-cloud-semantic-segmentation-worker-ui.md)
+ [Output data for a 3D point cloud semantic segmentation job](sms-point-cloud-semantic-segmentation-input-data.md)

# Create a 3D point cloud semantic segmentation labeling job
<a name="sms-point-cloud-semantic-segmentation-create-labeling-job"></a>

You can create a 3D point cloud labeling job using the SageMaker AI console or API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A single-frame input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, we recommend that you review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk workers for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
+ A label category configuration file. For more information, see [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md). 

Additionally, make sure that you have reviewed and satisfied the requirements in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

Use one of the following sections to learn how to create a labeling job using the console or an API. 

## Create a labeling job (console)
<a name="sms-point-cloud-semantic-segmentation-console"></a>

You can follow the instructions [Create a Labeling Job (Console)](sms-create-labeling-job-console.md) in order to learn how to create a 3D point cloud semantic segmentation labeling job in the SageMaker AI console. While you are creating your labeling job, be aware of the following: 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ Automated data labeling and annotation consolidation are not supported for 3D point cloud labeling tasks. 
+ 3D point cloud semantic segmentation labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs when you select your work team (up to 7 days, or 604,800 seconds). 

## Create a labeling job (API)
<a name="sms-point-cloud-semantic-segmentation-api"></a>

This section covers details you need to know when you create a labeling job using the SageMaker API operation `CreateLabelingJob`. This API defines this operation for all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

The page, [Create a Labeling Job (API)](sms-create-labeling-job-api.md), provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudSemanticSegmentation`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ss-labels-ref`. 
+ Your input manifest file must be a single-frame manifest file. For more information, see [Create a Point Cloud Frame Input Manifest File](sms-point-cloud-single-frame-input-data.md). 
+ You specify your labels and worker instructions in a label category configuration file. See [Labeling category configuration file with label category and frame attributes reference](sms-label-cat-config-attributes.md) to learn how to create this file. 
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudSemanticSegmentation`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN. For example, if you are creating your labeling job in us-east-1, the ARN will be `arn:aws:lambda:us-east-1:432418664414:function:ACS-3DPointCloudSemanticSegmentation`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` should be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`. 
+ 3D point cloud semantic segmentation labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 
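The settings above can be sketched as a `CreateLabelingJob` request. In this sketch, the job name, bucket paths, role ARN, and work team ARN are placeholders you must replace with your own values; the UI and Lambda ARNs are the us-east-1 values shown earlier.

```python
# Sketch of a CreateLabelingJob request for a 3D point cloud semantic
# segmentation job in us-east-1. Job name, buckets, role, and work team
# below are placeholders; substitute your own resources before submitting.
request = {
    "LabelingJobName": "3d-pc-semantic-segmentation-example",
    "LabelAttributeName": "ss-labels-ref",  # must end in -ref for this task type
    "InputConfig": {"DataSource": {"S3DataSource": {
        "ManifestS3Uri": "s3://DOC-EXAMPLE-BUCKET/single-frame-manifest.json"}}},
    "OutputConfig": {"S3OutputPath": "s3://DOC-EXAMPLE-BUCKET/output/"},
    "RoleArn": "arn:aws:iam::111122223333:role/GroundTruthExecutionRole",
    "LabelCategoryConfigS3Uri": "s3://DOC-EXAMPLE-BUCKET/label-categories.json",
    "HumanTaskConfig": {
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/my-team",
        # Pre-built worker UI for this task type; no UiTemplateS3Uri entry.
        "UiConfig": {"HumanTaskUiArn": "arn:aws:sagemaker:us-east-1:394669845002:human-task-ui/PointCloudSemanticSegmentation"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:PRE-3DPointCloudSemanticSegmentation",
        "AnnotationConsolidationConfig": {"AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:ACS-3DPointCloudSemanticSegmentation"},
        "NumberOfHumanWorkersPerDataObject": 1,
        "TaskTimeLimitInSeconds": 604800,  # the 7-day maximum
        "TaskTitle": "3D point cloud semantic segmentation",
        "TaskDescription": "Paint each point with its object category.",
    },
}

# With valid values in place, submit the request with boto3:
# import boto3
# boto3.client("sagemaker", region_name="us-east-1").create_labeling_job(**request)
```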

# View the worker task interface for a 3D point cloud semantic segmentation job
<a name="sms-point-cloud-semantic-segmentation-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D point cloud semantic segmentation annotation tasks. When you create the labeling job, you provide the Amazon Resource Name (ARN) for a pre-built Ground Truth UI in the `HumanTaskUiArn` parameter. When you create a labeling job using this task type in the console, this UI is used automatically. You can preview and interact with the worker UI when you create a labeling job in the console. If you are a new user, we recommend that you create a labeling job using the console to verify that your label attributes, point cloud frames, and, if applicable, images appear as expected. 

The following is a GIF of the 3D point cloud semantic segmentation worker task interface. If you provide camera data for sensor fusion, images are matched with scenes in the point cloud frame. Workers can paint objects in either the 3D point cloud or the 2D image, and the paint appears in the corresponding location in the other medium. These images appear in the worker portal as shown in the following GIF. 

![\[Gif showing how workers can use the 3D point cloud and 2D image together to paint objects.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_paint_sf.gif)


Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double-click on specific objects in the point cloud to zoom in to them.
+ Use a mouse scroll wheel or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys or the Q, E, A, and D keys to move up, down, left, and right. Use the W and S keys to zoom in and out. 

The following video demonstrates movements around the 3D point cloud. Workers can hide and re-expand all side views and menus. In this GIF, the side-views and menus have been collapsed. 

![\[Gif showing how workers can move around the 3D point cloud.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_nav_worker_portal.gif)


The following GIF demonstrates how a worker can label multiple objects quickly, refine painted objects using the Unpaint option and then view only points that have been painted. 

![\[Gif showing how a worker can label multiple objects.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss-view-options.gif)


Additional view options and features are available. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-semantic-segmentation.html) for a comprehensive overview of the Worker UI. 

**Worker Tools**  
Workers can navigate through the 3D point cloud by zooming in and out, and moving in all directions around the cloud using the mouse and keyboard shortcuts. When you create a semantic segmentation job, workers have the following tools available to them: 
+ A paint brush to paint and unpaint objects. Workers paint objects by selecting a label category and then painting in the 3D point cloud. Workers unpaint objects by selecting the Unpaint option from the label category menu and using the paint brush to erase paint. 
+ A polygon tool that workers can use to select and paint an area in the point cloud. 
+ A background paint tool, which enables workers to paint behind objects they have already annotated without altering the original annotations. For example, workers might use this tool to paint the road after painting all of the cars on the road. 
+ View options that enable workers to easily hide or view label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 

# Output data for a 3D point cloud semantic segmentation job
<a name="sms-point-cloud-semantic-segmentation-input-data"></a>

When you create a 3D point cloud semantic segmentation labeling job, tasks are sent to workers. When these workers complete their tasks, their annotations are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D point cloud semantic segmentation output data format, see [3D point cloud semantic segmentation output](sms-data-output.md#sms-output-point-cloud-segmentation). 

# Understand the 3D-2D point cloud object tracking task type
<a name="sms-point-cloud-3d-2d-object-tracking"></a>

Use this task type when you want workers to link 3D point cloud annotations with 2D image annotations, and to link 2D image annotations across multiple cameras. Currently, Ground Truth supports cuboids for annotation in a 3D point cloud and bounding boxes for annotation in 2D videos. For example, you can use this task type to ask workers to link the movement of a vehicle in a 3D point cloud with its movement in 2D video. Using 3D-2D linking, you can easily correlate point cloud data (like the distance of a cuboid) with video data (a bounding box) for up to 8 cameras.

Ground Truth provides workers with tools to annotate cuboids in a 3D point cloud and bounding boxes in up to 8 cameras using the same annotation UI. Workers can also link bounding boxes for the same object across different cameras. For example, a bounding box in camera1 can be linked to a bounding box in camera2. This lets you correlate an object across multiple cameras using a unique ID. 
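The unique-ID linking described above can be pictured as a simple data shape: one object ID carrying a 3D cuboid plus one 2D box per camera. This is an illustrative sketch with hypothetical field names, not Ground Truth's actual output schema.

```python
from dataclasses import dataclass, field

# Illustrative data shape for 3D-2D linking; the field names here are
# hypothetical and do not reflect Ground Truth's actual output schema.
@dataclass
class LinkedObject:
    object_id: str                     # one unique ID per tracked object
    cuboid: dict                       # 3D pose, e.g. center/dimensions/yaw
    camera_boxes: dict = field(default_factory=dict)  # camera name -> 2D box

car = LinkedObject("car-17", {"center": (12.0, 3.5, 0.9)})
# The same unique ID links boxes drawn in different cameras to one object.
car.camera_boxes["camera1"] = {"left": 100, "top": 40, "width": 80, "height": 60}
car.camera_boxes["camera2"] = {"left": 310, "top": 52, "width": 74, "height": 58}
```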

**Note**  
Currently, SageMaker AI does not support creating a 3D-2D linking job using the console. To create a 3D-2D linking job using the SageMaker API, see [Create a labeling job (API)](sms-3d-2d-point-cloud-object-tracking-create-labeling-job.md#sms-point-cloud-3d-2d-object-tracking-create-labeling-job-api). 

The following topics explain how to create a 3D-2D point cloud object tracking labeling job, show what the worker task interface looks like (what workers see when they work on this task), and provide an overview of the output data you get when workers complete their tasks.

**Topics**
+ [Create a 3D-2D point cloud object tracking labeling job](sms-3d-2d-point-cloud-object-tracking-create-labeling-job.md)
+ [View the worker task interface for a 3D-2D object tracking labeling job](sms-point-cloud-3d-2d-object-tracking-worker-ui.md)
+ [Output data for a 3D-2D object tracking labeling job](sms-point-cloud-3d-2d-object-tracking-output-data.md)

# Create a 3D-2D point cloud object tracking labeling job
<a name="sms-3d-2d-point-cloud-object-tracking-create-labeling-job"></a>

You can create a 3D-2D point cloud labeling job using the SageMaker API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following: 
+ A work team from a private or vendor workforce. You cannot use Amazon Mechanical Turk for 3D point cloud labeling jobs. To learn how to create workforces and work teams, see [Workforces](sms-workforce-management.md).
+ A CORS policy added to the S3 bucket that contains your input data. To set the required CORS headers on the S3 bucket that contains your input images in the S3 console, follow the directions in [CORS Permission Requirement](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-cors-update.html).
+ Additionally, make sure that you have reviewed and satisfied the [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md). 

To learn how to create a labeling job using the API, see the following sections. 

## Create a labeling job (API)
<a name="sms-point-cloud-3d-2d-object-tracking-create-labeling-job-api"></a>

This section covers details you need to know when you create a 3D-2D object tracking labeling job using the SageMaker API operation `CreateLabelingJob`. This API defines this operation for all AWS SDKs. To see a list of language-specific SDKs supported for this operation, review the **See Also** section of [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). 

[Create a Labeling Job (API)](sms-create-labeling-job-api.md) provides an overview of the `CreateLabelingJob` operation. Follow these instructions and do the following while you configure your request: 
+ You must enter an ARN for `HumanTaskUiArn`. Use `arn:aws:sagemaker:<region>:394669845002:human-task-ui/PointCloudObjectTracking`. Replace `<region>` with the AWS Region you are creating the labeling job in. 

  There should not be an entry for the `UiTemplateS3Uri` parameter. 
+ Your [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelAttributeName) must end in `-ref`. For example, `ot-labels-ref`. 
+ Your input manifest file must be a point cloud frame sequence manifest file. For more information, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). You also need to provide a label category configuration file, as described in the following section.
+ You need to provide pre-defined ARNs for the pre-annotation and post-annotation (ACS) Lambda functions. These ARNs are specific to the AWS Region you use to create your labeling job. 
  + To find the pre-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `PRE-3DPointCloudObjectTracking`. 
  + To find the post-annotation Lambda ARN, refer to [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_AnnotationConsolidationConfig.html#sagemaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn). Use the Region you are creating your labeling job in to find the correct ARN that ends with `ACS-3DPointCloudObjectTracking`. 
+ The number of workers specified in `NumberOfHumanWorkersPerDataObject` should be `1`. 
+ Automated data labeling is not supported for 3D point cloud labeling jobs. You should not specify values for parameters in `[LabelingJobAlgorithmsConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html#sagemaker-CreateLabelingJob-request-LabelingJobAlgorithmsConfig)`. 
+ 3D-2D object tracking labeling jobs can take multiple hours to complete. You can specify a longer time limit for these labeling jobs in `TaskTimeLimitInSeconds` (up to 7 days, or 604,800 seconds). 

**Note**  
After you have successfully created a 3D-2D object tracking job, it appears in the console under labeling jobs. The task type for the job is displayed as **Point Cloud Object Tracking**.

## Input data format
<a name="sms-point-cloud-3d-2d-object-tracking-input-data"></a>

You can create a 3D-2D object tracking job using the SageMaker API operation, [https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html). To create a labeling job for this task type you need the following:
+ A sequence input manifest file. To learn how to create this type of manifest file, see [Create a Point Cloud Sequence Input Manifest](sms-point-cloud-multi-frame-input-data.md). If you are a new user of Ground Truth 3D point cloud labeling modalities, we recommend that you review [Accepted Raw 3D Data Formats](sms-point-cloud-raw-data-types.md). 
+ You specify your labels, label category and frame attributes, and worker instructions in a label category configuration file. To learn how to create this file, see [Create a Labeling Category Configuration File with Label Category and Frame Attributes](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-label-cat-config-attributes.html). The following example shows a label category configuration file for a 3D-2D object tracking job.

  ```
  {
      "document-version": "2020-03-01",
      "categoryGlobalAttributes": [
          {
              "name": "Occlusion",
              "description": "global attribute that applies to all label categories",
              "type": "string",
              "enum":[
                  "Partial",
                  "Full"
              ]
          }
      ],
      "labels":[
          {
              "label": "Car",
              "attributes": [
                  {
                      "name": "Type",
                      "type": "string",
                      "enum": [
                          "SUV",
                          "Sedan"
                      ]
                  } 
              ]
          },
          {
              "label": "Bus",
              "attributes": [
                  {
                      "name": "Size",
                      "type": "string",
                      "enum": [
                          "Large",
                          "Medium",
                          "Small"
                      ]
                  }
              ]
          }
      ],
      "instructions": {
          "shortIntroduction": "Draw a tight cuboid around objects after you select a category.",
          "fullIntroduction": "<p>Use this area to add more detailed worker instructions.</p>"
      },
      "annotationType": [
          {
              "type": "BoundingBox"
          },
          {
              "type": "Cuboid"
          }
      ]
  }
  ```
**Note**  
You must include both `BoundingBox` and `Cuboid` as `annotationType` entries in the label category configuration file to create a 3D-2D object tracking job. 
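Before submitting the job, you might sanity-check the configuration file for the requirement above. This illustrative helper (not part of any AWS SDK) verifies that both required annotation types are present.

```python
import json

def has_required_annotation_types(path):
    """Return True if the label category configuration file at `path`
    declares both annotation types required for a 3D-2D linking job."""
    with open(path) as f:
        config = json.load(f)
    declared = {entry["type"] for entry in config.get("annotationType", [])}
    return {"BoundingBox", "Cuboid"} <= declared
```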

# View the worker task interface for a 3D-2D object tracking labeling job
<a name="sms-point-cloud-3d-2d-object-tracking-worker-ui"></a>

Ground Truth provides workers with a web portal and tools to complete your 3D-2D object tracking annotation tasks. When you create a labeling job for this task type using the API, you provide the Amazon Resource Name (ARN) of a pre-built Ground Truth UI in the `HumanTaskUiArn` parameter. You can preview and interact with the worker UI when you create a labeling job through the API; however, the annotation tools are part of the worker task interface only and are not available in the preview interface. The following image demonstrates the worker task interface used for the 3D-2D point cloud object tracking annotation task.

![\[The worker task interface used for the 3D-2D point cloud object tracking annotation task.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms-sensor-fusion.png)


Interpolation is enabled by default. After a worker adds a single cuboid, that cuboid is replicated in all frames of the sequence with the same ID. If the worker adjusts the cuboid in another frame, Ground Truth interpolates the movement of that object and adjusts all cuboids between the manually adjusted frames. Additionally, in the camera view section, a cuboid can be shown with a projection (using the B button for "toggle labels" in the camera view) that provides the worker with a reference from the camera images. The accuracy of the cuboid-to-image projection depends on the accuracy of the calibrations captured in the extrinsic and intrinsic data.

If you provide camera data for sensor fusion, images are matched with scenes in point cloud frames. The camera data should be time-synchronized with the point cloud data to ensure an accurate projection of the point cloud onto the imagery for each frame in the sequence, as shown in the following image.

![\[The manifest file, the worker portal with point cloud data and the camera data.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/3d_2d_link_ss.png)


The manifest file holds the extrinsic and intrinsic data and the pose, which allow the cuboid projection on the camera image to be shown using the **P button**.

Workers can navigate in the 3D scene using their keyboard and mouse. They can:
+ Double click on specific objects in the point cloud to zoom into them.
+ Use a mouse-scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

Once a worker places a cuboid in the 3D scene, a side view appears with three projected views: top, side, and front. These side views show points in and around the placed cuboid and help workers refine cuboid boundaries in that area. Workers can zoom in and out of each side view using their mouse.

To link a cuboid and a bounding box, the worker first selects the cuboid and then draws a corresponding bounding box in any of the camera views. The link gives the cuboid and the bounding box a common name and unique ID.

The worker can also work in the opposite order: draw a bounding box first, select it, and then draw the corresponding cuboid to link the two.

Additional view options and features are available. See the [worker instruction page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-worker-instructions-object-tracking.html) for a comprehensive overview of the worker UI. 

## Worker tools
<a name="sms-point-cloud-object-tracking-worker-tools"></a>

Workers can navigate through the 3D point cloud by zooming in and out, and moving in all directions around the cloud using the mouse and keyboard shortcuts. If workers click on a point in the point cloud, the UI automatically zooms into that area. Workers can use various tools to draw 3D cuboids around objects. For more information, see **Assistive Labeling Tools** in the following discussion. 

After workers have placed a 3D cuboid in the point cloud, they can adjust these cuboids to fit tightly around cars using a variety of views: directly in the 3D point cloud, in a side-view featuring three zoomed-in perspectives of the point cloud around the box, and if you include images for sensor fusion, directly in the 2D image. 

Additional view options enable workers to easily hide or view label text, a ground mesh, and additional point attributes. Workers can also choose between perspective and orthogonal projections. 

**Assistive Labeling Tools**  
Ground Truth helps workers annotate 3D point clouds faster and more accurately using UX, machine learning and computer vision powered assistive labeling tools for 3D point cloud object tracking tasks. The following assistive labeling tools are available for this task type:
+ **Label autofill** – When a worker adds a cuboid to a frame, a cuboid with the same dimensions, orientation, and x, y, z position is automatically added to all frames in the sequence. 
+ **Label interpolation** – After a worker has labeled a single object in two frames, Ground Truth uses those annotations to interpolate the movement of that object between all the frames. Label interpolation can be turned on and off. It is on by default. For example, if a worker working with 5 frames adds a cuboid in frame 2, it is copied to all 5 frames. If the worker then makes adjustments in frame 4, frames 2 and 4 act as two points through which a line is fit. The cuboid is then interpolated in frames 1, 3, and 5.
+ **Bulk label and attribute management** – Workers can add, delete, and rename annotations, label category attributes, and frame attributes in bulk.
  + Workers can manually delete annotations for a given object before and after a frame, or in all frames. For example, a worker can delete all labels for an object after frame 10 if that object is no longer located in the scene after that frame. 
  + If a worker accidentally bulk deletes all annotations for an object, they can add them back. For example, if a worker deletes all annotations for an object before frame 100, they can bulk add them to those frames. 
  + Workers can rename a label in one frame and all 3D cuboids assigned that label are updated with the new name across all frames. 
  + Workers can use bulk editing to add or edit label category attributes and frame attributes in multiple frames.
+ **Snapping** – Workers can add a cuboid around an object and use a keyboard shortcut or menu option to have Ground Truth's autofit tool snap the cuboid tightly around the object's boundaries. 
+ **Fit to ground** – After a worker adds a cuboid to the 3D scene, the worker can automatically snap the cuboid to the ground. For example, the worker can use this feature to snap a cuboid to the road or sidewalk in the scene. 
+ **Multi-view labeling** – After a worker adds a 3D cuboid to the 3D scene, a side panel displays the front and two side perspectives to help the worker adjust the cuboid tightly around the object. Workers can adjust the annotation in the 3D point cloud or in the side panel, and the adjustments appear in the other views in real time. 
+ **Sensor fusion** – If you provide data for sensor fusion, workers can adjust annotations in the 3D scenes and in 2D images, and the annotations are projected into the other view in real time. To learn more about the data for sensor fusion, see [Understand Coordinate Systems and Sensor Fusion](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-sensor-fusion-details.html#sms-point-cloud-sensor-fusion).
+ **Auto-merge cuboids** – Workers can automatically merge two cuboids across all frames if they determine that cuboids with different labels actually represent a single object. 
+ **View options** – Enables workers to easily hide or view label text, a ground mesh, and additional point attributes like color or intensity. Workers can also choose between perspective and orthogonal projections. 
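The label interpolation behavior described above can be illustrated with simple linear interpolation between manually adjusted keyframes. This is only a conceptual sketch, not Ground Truth's implementation:

```
def interpolate_cuboid(keyframes, frame):
    """Linearly interpolate a cuboid's (x, y, z) center for `frame` from
    manually adjusted keyframes given as {frame_index: (x, y, z)}.
    Illustrative sketch of the behavior only."""
    frames = sorted(keyframes)
    # Outside the keyframe range, the nearest keyframe is copied.
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the surrounding pair of keyframes and interpolate between them.
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            a, b = keyframes[lo], keyframes[hi]
            return tuple(x + t * (y - x) for x, y in zip(a, b))

# A worker placed the cuboid in frame 2 and adjusted it in frame 4:
keyframes = {2: (0.0, 0.0, 0.0), 4: (4.0, 2.0, 0.0)}
print(interpolate_cuboid(keyframes, 3))  # frame 3 falls halfway: (2.0, 1.0, 0.0)
print(interpolate_cuboid(keyframes, 5))  # frames past the last keyframe copy it
```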

# Output data for a 3D-2D object tracking labeling job
<a name="sms-point-cloud-3d-2d-object-tracking-output-data"></a>

When you create a 3D-2D object tracking labeling job, tasks are sent to workers. When these workers complete their tasks, their annotations are written to the Amazon S3 bucket you specified when you created the labeling job. The output data format determines what you see in your Amazon S3 bucket when your labeling job status ([LabelingJobStatus](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeLabelingJob.html#API_DescribeLabelingJob_ResponseSyntax)) is `Completed`. 

If you are a new user of Ground Truth, see [Labeling job output data](sms-data-output.md) to learn more about the Ground Truth output data format. To learn about the 3D-2D point cloud object tracking output data format, see [3D-2D point cloud object tracking output](sms-data-output.md#sms-output-3d-2d-point-cloud-object-tracking). 
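One way to detect that output data is ready is to poll the `DescribeLabelingJob` API until the job leaves the `InProgress` state. A minimal sketch, assuming a boto3 SageMaker client and a hypothetical job name:

```
import time

def wait_for_labeling_job(sm_client, job_name, poll_seconds=60):
    """Poll DescribeLabelingJob until the job leaves InProgress, then
    return the terminal LabelingJobStatus (for example, Completed,
    Failed, or Stopped). `sm_client` is a SageMaker client, for
    example boto3.client("sagemaker")."""
    while True:
        response = sm_client.describe_labeling_job(LabelingJobName=job_name)
        status = response["LabelingJobStatus"]
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)

# Usage (requires AWS credentials; the job name is a placeholder):
# import boto3
# status = wait_for_labeling_job(boto3.client("sagemaker"),
#                                "example-point-cloud-labeling-job")
```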

# 3D point cloud labeling jobs overview
<a name="sms-point-cloud-general-information"></a>

This topic provides an overview of the unique features of a Ground Truth 3D point cloud labeling job. You can use 3D point cloud labeling jobs to have workers label objects in a 3D point cloud generated from 3D sensors like LiDAR and depth cameras, or generated from 3D reconstruction by stitching images captured by an agent like a drone. 

## Job pre-processing time
<a name="sms-point-cloud-job-creation-time"></a>

When you create a 3D point cloud labeling job, you need to provide an [input manifest file](sms-point-cloud-input-data.md). The input manifest file can be:
+ A *frame input manifest file* that has a single point cloud frame on each line. 
+ A *sequence input manifest file* that has a single sequence on each line. A sequence is defined as a temporal series of point cloud frames. 

For both types of manifest files, *job pre-processing time* (that is, the time before Ground Truth starts sending tasks to your workers) depends on the total number and size of point cloud frames you provide in your input manifest file. For frame input manifest files, this is the number of lines in your manifest file. For sequence manifest files, this is the number of frames in each sequence multiplied by the total number of sequences, or lines, in your manifest file. 

Additionally, the number of points per point cloud and the number of fused sensor data objects (like images) factor into job pre-processing times. On average, Ground Truth can pre-process 200 point cloud frames in approximately 5 minutes. If you create a 3D point cloud labeling job with a large number of point cloud frames, you might experience longer job pre-processing times. For example, if you create a sequence input manifest file with 4 point cloud sequences, and each sequence contains 200 point clouds, Ground Truth pre-processes 800 point clouds and so your job pre-processing time might be around 20 minutes. During this time, your labeling job status is `InProgress`. 
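Based on the average rate above (about 200 point cloud frames per 5 minutes), you can roughly estimate pre-processing time before you create a job. This is only a sketch of the arithmetic; actual times also depend on points per frame and fused sensor data:

```
def estimated_preprocessing_minutes(num_sequences, frames_per_sequence,
                                    frames_per_5_min=200):
    """Rough pre-processing estimate using the average rate of about
    200 point cloud frames per 5 minutes. Actual times vary with points
    per frame and the amount of fused sensor data."""
    total_frames = num_sequences * frames_per_sequence
    return total_frames / frames_per_5_min * 5

# 4 sequences of 200 frames each = 800 frames to pre-process.
print(estimated_preprocessing_minutes(4, 200))  # 20.0 (minutes)
```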

While your 3D point cloud labeling job is pre-processing, you receive CloudWatch messages notifying you of the status of your job. To identify these messages, search for `3D_POINT_CLOUD_PROCESSING_STATUS` in your labeling job logs. 

For **frame input manifest files**, your CloudWatch logs will have a message similar to the following:

```
{
    "labeling-job-name": "example-point-cloud-labeling-job",
    "event-name": "3D_POINT_CLOUD_PROCESSING_STATUS",
    "event-log-message": "datasetObjectId from: 0 to 10, status: IN_PROGRESS"
}
```

The event log message `datasetObjectId from: 0 to 10, status: IN_PROGRESS` identifies the number of frames from your input manifest that have been processed. You receive a new message every time a frame is processed. For example, after a single frame has been processed, you receive another message that says `datasetObjectId from: 1 to 10, status: IN_PROGRESS`. 
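If you monitor these logs programmatically, you can extract the progress counters from the event log message. A sketch, assuming the frame-manifest message format shown above:

```
import re

def parse_frame_progress(event_log_message):
    """Extract (processed, total, status) from a frame-manifest progress
    message such as 'datasetObjectId from: 0 to 10, status: IN_PROGRESS'.
    Returns None if the message has a different shape."""
    m = re.match(r"datasetObjectId from: (\d+) to (\d+), status: (\w+)",
                 event_log_message)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)

print(parse_frame_progress("datasetObjectId from: 1 to 10, status: IN_PROGRESS"))
# (1, 10, 'IN_PROGRESS')
```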

For **sequence input manifest files**, your CloudWatch logs will have a message similar to the following:

```
{
    "labeling-job-name": "example-point-cloud-labeling-job",
    "event-name": "3D_POINT_CLOUD_PROCESSING_STATUS",
    "event-log-message": "datasetObjectId: 0, status: IN_PROGRESS"
}
```

The event log message `datasetObjectId: 0, status: IN_PROGRESS` identifies the number of sequences from your input manifest that have been processed. You receive a new message every time a sequence is processed. For example, after a single sequence has been processed, you receive a message that says `datasetObjectId: 1, status: IN_PROGRESS` as the next sequence begins processing. 

## Job completion times
<a name="sms-point-cloud-job-completion-times"></a>

3D point cloud labeling jobs can take workers hours to complete. You can set the total amount of time that workers can work on each task when you create a labeling job. The maximum time you can set for workers to work on tasks is 7 days. The default value is 3 days. 

It is strongly recommended that you create tasks that workers can complete within 12 hours. Workers must keep the worker UI open while working on a task. They can save work as they go, and Ground Truth automatically saves their work every 15 minutes.

When using the SageMaker AI `CreateLabelingJob` API operation, set the total time a task is available to workers in the `TaskTimeLimitInSeconds` parameter of `HumanTaskConfig`. 
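For example, the default and maximum above translate to the following `TaskTimeLimitInSeconds` values (`TaskTimeLimitInSeconds` is the real `HumanTaskConfig` parameter; the other required `HumanTaskConfig` fields are omitted here for brevity):

```
# 7 days is the maximum time limit for 3D point cloud tasks; 3 days is
# the default. Other required HumanTaskConfig fields are omitted.
MAX_TASK_TIME_SECONDS = 7 * 24 * 60 * 60      # 7 days = 604800
DEFAULT_TASK_TIME_SECONDS = 3 * 24 * 60 * 60  # 3 days = 259200

human_task_config = {
    "TaskTimeLimitInSeconds": DEFAULT_TASK_TIME_SECONDS,
}

assert human_task_config["TaskTimeLimitInSeconds"] <= MAX_TASK_TIME_SECONDS
print(human_task_config["TaskTimeLimitInSeconds"])  # 259200
```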

When you create a labeling job in the console, you can specify this time limit when you select your workforce type and your work team.

## Workforces
<a name="sms-point-cloud-workforces"></a>

When you create a 3D point cloud labeling job, you need to specify a work team that will complete your point cloud annotation tasks. You can choose a work team from a private workforce of your own workers, or from a vendor workforce that you select in the AWS Marketplace. You cannot use the Amazon Mechanical Turk workforce for 3D point cloud labeling jobs. 

To learn more about vendor workforce, see [Subscribe to vendor workforces](sms-workforce-management-vendor.md).

To learn how to create and manage a private workforce, see [Private workforce](sms-workforce-private.md).

## Worker user interface (UI)
<a name="sms-point-cloud-worker-task-ui"></a>

Ground Truth provides a worker user interface (UI), tools, and assistive labeling features to help workers complete your 3D point cloud labeling tasks. 

You can preview the worker UI when you create a labeling job in the console.

When you create a labeling job using the API operation `CreateLabelingJob`, you must provide an ARN provided by Ground Truth in the [HumanTaskUiArn](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UiConfig.html#sagemaker-Type-UiConfig-UiTemplateS3Uri) parameter to specify the worker UI for your task type. You can use `HumanTaskUiArn` with the SageMaker AI [RenderUiTemplate](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_RenderUiTemplate.html) API operation to preview the worker UI. 
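The following is a sketch of a `RenderUiTemplate` request for previewing the worker UI. The role ARN, human task UI ARN, and task input below are placeholders; substitute the pre-built UI ARN for your task type and Region:

```
import json

# All ARNs and the task input below are placeholders, not real resources.
render_request = {
    "RoleArn": "arn:aws:iam::111122223333:role/example-ground-truth-role",
    "HumanTaskUiArn": ("arn:aws:sagemaker:us-east-1:111122223333:"
                       "human-task-ui/example-task-ui"),
    "Task": {"Input": json.dumps({"source-ref": "s3://example-bucket/frame.json"})},
}

# Preview the rendered worker UI (requires AWS credentials):
# response = boto3.client("sagemaker").render_ui_template(**render_request)
# response["RenderedContent"] contains the HTML preview.
print(sorted(render_request))  # ['HumanTaskUiArn', 'RoleArn', 'Task']
```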

You provide worker instructions, labels, and optionally, label category attributes that are displayed in the worker UI.

### Label category attributes
<a name="sms-point-cloud-label-and-frame-attributes"></a>

When you create a 3D point cloud object tracking or object detection labeling job, you can add one or more *label category attributes*. You can add *frame attributes* to all 3D point cloud task types: 
+ **Label category attribute** – A list of options (strings), a free-form text box, or a numeric field associated with one or more labels. Workers use it to provide metadata about a label. 
+ **Frame attribute** – A list of options (strings), a free-form text box, or a numeric field that appears on each point cloud frame a worker is sent to annotate. Workers use it to provide metadata about frames. 

Additionally, you can use label and frame attributes to have workers verify labels in a 3D point cloud label verification job. 

Use the following sections to learn more about these attributes. To learn how to add label category and frame attributes to a labeling job, use the **Create Labeling Job** section on the [task type page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-task-types) of your choice.

#### Label category attributes
<a name="sms-point-cloud-label-attributes"></a>

Add label category attributes to labels to give workers the ability to provide more information about the annotations they create. A label category attribute is added to an individual label, or to all labels. When a label category attribute is applied to all labels, it is referred to as a *global label category attribute*. 

For example, if you add the label category *car*, you might also want to capture additional data about your labeled cars, such as whether they are occluded or the size of the car. You can capture this metadata using label category attributes. In this example, if you added the attribute *occluded* to the *car* label category, you could assign the values *partial*, *completely*, or *no* to the *occluded* attribute and enable workers to select one of these options. 

When you create a label verification job, you add label category attributes to each label you want workers to verify.

#### Frame attributes
<a name="sms-point-cloud-frame-attributes"></a>

Add frame attributes to give workers the ability to provide more information about individual point cloud frames. You can specify up to 10 frame attributes, and these attributes will appear on all frames.

For example, you can add a frame attribute that allows workers to enter a number. You may want to use this attribute to have workers identify the number of objects they see in a particular frame. 

In another example, you may want to provide a free-form text box that workers can use to answer a question.

When you create a label verification job, you can add one or more frame attributes to ask workers to provide feedback on all labels in a point cloud frame. 
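Frame attributes are declared in the label category configuration file alongside your labels. The following is a sketch of the two examples above (a numeric attribute and a free-form text attribute); see the label category configuration documentation linked earlier for the authoritative schema:

```
{
    "document-version": "2020-03-01",
    "frameAttributes": [
        {
            "name": "object count",
            "description": "How many objects do you see in this frame?",
            "type": "number"
        },
        {
            "name": "frame feedback",
            "description": "Describe any issues you see with this frame.",
            "type": "string"
        }
    ],
    "labels": [
        {
            "label": "Car"
        }
    ],
    "instructions": {
        "shortIntroduction": "Label each car in every frame.",
        "fullIntroduction": "<p>Add detailed worker instructions here.</p>"
    }
}
```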

### Worker instructions
<a name="sms-point-cloud-worker-instructions-general"></a>

You can provide worker instructions to help your workers complete your point cloud labeling tasks. You might want to use these instructions to provide the following:
+ Best practices and things to avoid when annotating objects.
+ An explanation of the label category attributes provided (for object detection and object tracking tasks), and how to use them.
+ Advice on how to save time while labeling by using keyboard shortcuts. 

You can add your worker instructions using the SageMaker AI console while creating a labeling job. If you create a labeling job using the API operation `CreateLabelingJob`, you specify worker instructions in your label category configuration file. 

In addition to your instructions, Ground Truth provides a link to help workers navigate and use the worker portal. View these instructions by selecting the task type on [Worker instructions](sms-point-cloud-worker-instructions.md). 

### Declining tasks
<a name="sms-decline-task-point-cloud"></a>

Workers are able to decline tasks. 

Workers might decline a task if the instructions are not clear, if input data does not display correctly, or if they encounter some other issue with the task. If the task is declined by as many workers as the number of workers per dataset object ([NumberOfHumanWorkersPerDataObject](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_HumanTaskConfig.html#sagemaker-Type-HumanTaskConfig-NumberOfHumanWorkersPerDataObject)), the data object is marked as expired and is not sent to additional workers.

# 3D point cloud labeling job permission requirements
<a name="sms-security-permission-3d-point-cloud"></a>

When you create a 3D point cloud labeling job, in addition to the permission requirements found in [Assign IAM Permissions to Use Ground Truth](sms-security-permission.md), you must add a CORS policy to your S3 bucket that contains your input manifest file. 

## Add a CORS permission policy to S3 bucket
<a name="sms-permissions-execution-role"></a>

When you create a 3D point cloud labeling job, you specify buckets in S3 where your input data and manifest file are located and where your output data will be stored. These buckets may be the same. You must attach the following cross-origin resource sharing (CORS) policy to your input and output buckets. If you use the Amazon S3 console to add the policy to your bucket, you must use the JSON format.

**JSON**

```
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "Access-Control-Allow-Origin"
        ],
        "MaxAgeSeconds": 3000
    }
]
```

**XML**

```
<?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    </CORSConfiguration>
```

To learn how to add a CORS policy to an S3 bucket, see [How do I add cross-domain resource sharing with CORS?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-cors-configuration.html) in the Amazon Simple Storage Service User Guide.
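If you prefer to set the policy programmatically, the JSON policy above maps directly to the S3 `PutBucketCors` API. A minimal boto3 sketch, with a placeholder bucket name:

```
# The CORS rules below mirror the JSON policy shown above.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedHeaders": ["*"],
            "AllowedMethods": ["GET", "HEAD", "PUT"],
            "AllowedOrigins": ["*"],
            "ExposeHeaders": ["Access-Control-Allow-Origin"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# Apply to your bucket (requires AWS credentials; the bucket name is a
# placeholder):
# boto3.client("s3").put_bucket_cors(
#     Bucket="example-input-bucket", CORSConfiguration=cors_configuration)
print(cors_configuration["CORSRules"][0]["AllowedMethods"])  # ['GET', 'HEAD', 'PUT']
```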

# Worker instructions
<a name="sms-point-cloud-worker-instructions"></a>

This topic provides an overview of the Ground Truth worker portal and the tools available to complete your 3D point cloud labeling task. First, select the type of task you are working on from **Topics**. 

For adjustment jobs, select the original labeling job task type that produced the labels you are adjusting. Review and adjust the labels in your task as needed. 

**Important**  
It is recommended that you complete your task using a Google Chrome or Firefox web browser.

**Topics**
+ [3D point cloud semantic segmentation](sms-point-cloud-worker-instructions-semantic-segmentation.md)
+ [3D point cloud object detection](sms-point-cloud-worker-instructions-object-detection.md)
+ [3D point cloud object tracking](sms-point-cloud-worker-instructions-object-tracking.md)

# 3D point cloud semantic segmentation
<a name="sms-point-cloud-worker-instructions-semantic-segmentation"></a>

Use this page to familiarize yourself with the user interface and tools available to complete your 3D point cloud semantic segmentation task.

**Topics**
+ [Your Task](#sms-point-cloud-worker-instructions-ss-task)
+ [Navigate the UI](#sms-point-cloud-worker-instructions-worker-ui-ss)
+ [Icon Guide](#sms-point-cloud-worker-instructions-ss-icons)
+ [Shortcuts](#sms-point-cloud-worker-instructions-ss-hot-keys)
+ [Release, Stop and Resume, and Decline Tasks](#sms-point-cloud-worker-instructions-skip-reject-ss)
+ [Saving Your Work and Submitting](#sms-point-cloud-worker-instructions-saving-work-ss)

## Your Task
<a name="sms-point-cloud-worker-instructions-ss-task"></a>

When you work on a 3D point cloud semantic segmentation task, you need to select a category from the **Annotations** menu on the right side of your worker portal using the **Label Categories** drop-down menu. After you've selected a category, use the paint brush and polygon tools to paint each object in the 3D point cloud that this category applies to. For example, if you select the category **Car**, you would use these tools to paint all of the cars in the point cloud. The following video demonstrates how to use the paint brush tool to paint an object. 

If you see one or more images in your worker portal, you can paint in the images or paint in the 3D point cloud and the paint will show up in the other medium. 

You may see frame attributes under the **Labels** menu. Use these attribute prompts to enter additional information about the point cloud. 

![\[Example frame attribute prompt.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/frame-attributes.png)


**Important**  
If you see that objects have already been painted when you open the task, adjust those annotations.

The following video includes an image that can be annotated. You may not see an image in your task. 

![\[Gif showing how workers can use the 3D point cloud and 2D image together to paint objects.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_paint_sf.gif)


After you've painted one or more objects using a label category, you can select that category from the **Label Categories** menu on the right to view only the points painted for that category. 

![\[Gif showing how workers can move around the 3D point cloud.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss-view-options.gif)


## Navigate the UI
<a name="sms-point-cloud-worker-instructions-worker-ui-ss"></a>

You can navigate in the 3D scene using your keyboard and mouse. You can:
+ Double click on specific objects in the point cloud to zoom into them.
+ Use a mouse-scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

The following video demonstrates movements around the 3D point cloud and in the side-view. You can hide and re-expand all side views using the full screen icon. In this GIF, the side-views and menus have been collapsed.

![\[Gif shows how a worker can use the 3D point cloud in the point cloud view UI.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/semantic_seg/ss_nav_worker_portal.gif)


When you are in the worker UI, you see the following menus:
+ **Instructions** – Review these instructions before starting your task.
+ **Shortcuts** – Use this menu to view keyboard shortcuts that you can use to navigate the point cloud and use the annotation tools provided. 
+ **View** – Use this menu to toggle different view options on and off. For example, you can use this menu to add a ground mesh to the point cloud, and to choose the projection of the point cloud. 
+ **3D Point Cloud** – Use this menu to add additional attributes to the points in the point cloud, such as color and pixel intensity. Note that some or all of these options may not be available.
+ **Paint** – Use this menu to modify the functionality of the paint brush. 

When you open a task, the move scene icon is on, and you can move around the point cloud using your mouse and the navigation buttons in the point cloud area of the screen. To return to the original view you see when you first opened the task, choose the reset scene icon. 

After you select the paint icon, you can add paint to the point cloud and images (if included). You must select the move scene icon again to move to another area in the 3D point cloud or image. 

To collapse all panels on the right and make the 3D point cloud full screen, select the full screen icon. 

For the camera images and side-panels, you have the following view options:
+ **C** – View the camera angle on point cloud view.
+ **F** – View the frustum, or field of view, of the camera used to capture that image on point cloud view. 
+ **P** – View the point cloud overlaid on the image. 

## Icon Guide
<a name="sms-point-cloud-worker-instructions-ss-icons"></a>

Use this table to learn about the icons available in your worker task portal. 


| Icon | Name | Description | 
| --- | --- | --- | 
|  ![\[The Brush icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/brush.png)  |  brush  |  Choose this icon to turn on the brush tool. To use this tool, choose it and then move your mouse over the objects that you want to paint. After you choose a category, everything you paint will be associated with that category.  | 
|  ![\[The Polygon icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/polygon.png)  |  polygon  |  Choose this icon to use the polygon paint tool. Use this tool to draw polygons around objects that you want to paint. After you choose it, everything you draw a polygon around will be associated with the category you have chosen.  | 
|  ![\[The Reset scene icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/fit_scene.png)  |  reset scene  | Choose this icon to reset the view of the point cloud, side panels, and if applicable, all images to their original position when the task was first opened.  | 
|  ![\[The Move scene icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/move_scene.png)  |  move scene  |  Choose this icon to move the scene. By default, this icon will be selected when you first start a task.   | 
|  ![\[The Full screen icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/fullscreen.png)  |  full screen   |  Choose this icon to make the 3D point cloud visualization full screen, and to collapse all side panels.  | 
|  ![\[The Ruler icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/Ruler_icon.png)  |  ruler  |  Use this icon to measure distances, in meters, in the point cloud. You may want to use this tool if your instructions ask you to annotate all objects within a given distance of the center of the cuboid or of the object used to capture data. When you select this icon, you can place the starting point (first marker) anywhere in the point cloud by selecting it with your mouse. The tool automatically uses interpolation to place a marker on the closest point within a threshold distance of the location you select; otherwise, the marker is placed on the ground. If you place a starting point by mistake, you can use the Escape key to revert the marker placement. After you place the first marker, you see a dotted line and a dynamic label that indicates the distance you have moved away from the first marker. Click somewhere else on the point cloud to place a second marker. When you place the second marker, the dotted line becomes solid and the distance is set. After you set a distance, you can edit it by selecting either marker. You can delete a ruler by selecting anywhere on the ruler and pressing the Delete key on your keyboard.  | 

## Shortcuts
<a name="sms-point-cloud-worker-instructions-ss-hot-keys"></a>

The shortcuts listed in the **Shortcuts** menu can help you navigate the 3D point cloud and use the paint tool. 

Before you start your task, it is recommended that you review the **Shortcuts** menu and become acquainted with these commands. 

## Release, Stop and Resume, and Decline Tasks
<a name="sms-point-cloud-worker-instructions-skip-reject-ss"></a>

When you open the labeling task, three buttons on the top right allow you to decline the task (**Decline task**), release it (**Release task**), and stop and resume it at a later time (**Stop and resume later**). The following list describes what happens when you select one of these options:
+ **Decline task**: You should only decline a task if something is wrong with the task, such as an issue with the 3D point cloud, images or the UI. If you decline a task, you will not be able to return to the task.
+ **Release task**: If you release a task, you lose all work done on that task. When the task is released, other workers on your team can pick it up. If enough workers pick up the task, you may not be able to return to it. When you select this button and then select **Confirm**, you are returned to the worker portal. If the task is still available, its status will be **Available**. If other workers pick it up, it will disappear from your portal. 
+ **Stop and resume later**: You can use the **Stop and resume later** button to stop working and return to the task at a later time. You should use the **Save** button to save your work before you select **Stop and resume later**. When you select this button and then select **Confirm**, you are returned to the worker portal, and the task status is **Stopped**. You can select the same task to resume work on it. 

  Be aware that the person who creates your labeling tasks specifies a time limit within which all tasks must be completed. If you do not return to and complete this task within that time limit, it will expire and your work will not be submitted. Contact your administrator for more information. 

## Saving Your Work and Submitting
<a name="sms-point-cloud-worker-instructions-saving-work-ss"></a>

You should periodically save your work. Ground Truth automatically saves your work every 15 minutes. 

When you open a task, you must complete your work on it before pressing **Submit**. 

# 3D point cloud object detection
<a name="sms-point-cloud-worker-instructions-object-detection"></a>

Use this page to familiarize yourself with the user interface and tools available to complete your 3D point cloud object detection task.

**Topics**
+ [Your Task](#sms-point-cloud-worker-instructions-od-task)
+ [Navigate the UI](#sms-point-cloud-worker-instructions-worker-ui-od)
+ [Icon Guide](#sms-point-cloud-worker-instructions-od-icons)
+ [Shortcuts](#sms-point-cloud-worker-instructions-od-hot-keys)
+ [Release, Stop and Resume, and Decline Tasks](#sms-point-cloud-worker-instructions-skip-reject-od)
+ [Saving Your Work and Submitting](#sms-point-cloud-worker-instructions-saving-work-od)

## Your Task
<a name="sms-point-cloud-worker-instructions-od-task"></a>

When you work on a 3D point cloud object detection task, you need to select a category from the **Annotations** menu on the right side of your worker portal using the **Label Categories** menu. After you've chosen a category, use the add cuboid and fit cuboid tools to fit a cuboid around objects in the 3D point cloud that this category applies to. After you place a cuboid, you can modify its dimensions, location, and orientation directly in the point cloud and in the three panels shown on the right. 

If you see one or more images in your worker portal, you can also modify cuboids in the images or in the 3D point cloud and the edits will show up in the other medium. 

If you see cuboids have already been added to the 3D point cloud when you open your task, adjust those cuboids and add additional cuboids as needed. 

To edit a cuboid, including moving, reorienting, and changing its dimensions, you must use shortcut keys. You can see a full list of shortcut keys in the **Shortcuts** menu in your UI. The following are important key combinations that you should become familiar with before starting your labeling task. 



| Mac Command | Windows Command | Action | 
| --- | --- | --- | 
|  Cmd + Drag  |  Ctrl + Drag  |  Modify the dimensions of the cuboid.  | 
|  Option + Drag  |  Alt + Drag  |  Move the cuboid.  | 
|  Shift + Drag  |  Shift + Drag  |  Rotate the cuboid.  | 
|  Option + O  |  Alt + O  |  Fit the cuboid tightly around the points it has been drawn around. Before using this option, make sure the cuboid fully surrounds the object of interest.  | 
|  Option + G  |  Alt + G  |  Set the cuboid to the ground.  | 

Individual labels may have one or more label attributes. If a label has a label attribute associated with it, it will appear when you select the downward pointing arrow next to the label from the **Label Id** menu. Fill in required values for all label attributes. 

You may see frame attributes under the **Labels** menu. Use these attribute prompts to enter additional information about each frame. 

![\[Example frame attribute prompt.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/frame-attributes.png)


## Navigate the UI
<a name="sms-point-cloud-worker-instructions-worker-ui-od"></a>

You can navigate in the 3D scene using your keyboard and mouse. You can:
+ Double click on specific objects in the point cloud to zoom into them. 
+ Use the [ and ] keys on your keyboard to zoom into and move from one label to the next. If no label is selected, when you press [ or ], the UI zooms into the first label in the **Label Id** list. 
+ Use a mouse scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

Once you place a cuboid in the 3D scene, a side view appears with three projected views: top, side, and back. These side views show points in and around the placed cuboid and help you refine cuboid boundaries in that area. You can zoom in and out of each of these side views using your mouse. 

The following video demonstrates movements around the 3D point cloud and in the side-view. 

![\[Gif showing movements around the 3D point cloud and the side-view.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/navigate_od_worker_ui.gif)


When you are in the worker UI, you see the following menus:
+ **Instructions** – Review these instructions before starting your task.
+ **Shortcuts** – Use this menu to view keyboard shortcuts that you can use to navigate the point cloud and use the annotation tools provided. 
+ **Label** – Use this menu to modify a cuboid. First, select a cuboid, and then choose an option from this menu. This menu includes assistive labeling tools like setting a cuboid to the ground and automatically fitting the cuboid to the object's boundaries. 
+ **View** – Use this menu to toggle different view options on and off. For example, you can use this menu to add a ground mesh to the point cloud, and to choose the projection of the point cloud. 
+ **3D Point Cloud** – Use this menu to add additional attributes to the points in the point cloud, such as color and pixel intensity. Note that these options may not be available.

When you open a task, the move scene icon is on, and you can move around the point cloud using your mouse and the navigation buttons in the point cloud area of the screen. To return to the original view you see when you first opened the task, choose the reset scene icon. Resetting the view will not modify your annotations. 

After you select the add cuboid icon, you can add cuboids to the 3D point cloud visualization. Once you've added a cuboid, you can adjust it in the three views (top, side, and front) and in the images (if included). 

![\[Gif showing how a worker can annotate a 3D point cloud in the Ground Truth worker portal.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_detection/ot_basic_tools.gif)


You must choose the move scene icon again to move to another area in the 3D point cloud or image. 

To collapse all panels on the right and make the 3D point cloud full-screen, choose the full screen icon. 

If camera images are included, you may have the following view options:
+ **C** – View the camera angle on point cloud view.
+ **F** – View the frustum, or field of view, of the camera used to capture that image on point cloud view. 
+ **P** – View the point cloud overlaid on the image.
+ **B** – View cuboids in the image. 

The following video demonstrates how to use these view options. The **F** option is used to view the field of view of the camera (the gray area), the **C** option shows the direction the camera is facing and the angle of the camera (blue lines), and the **B** option is used to view the cuboid. 

![\[Gif showing how to use various view options.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/view-options-side.gif)


## Icon Guide
<a name="sms-point-cloud-worker-instructions-od-icons"></a>

Use this table to learn about the icons you see in your worker task portal. 


| Icon | Name | Description | 
| --- | --- | --- | 
|  ![\[The Add cuboid icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/add_cuobid.png)  |  add cuboid  |  Choose this icon to add a cuboid. Each cuboid you add is associated with the category you chose.   | 
|  ![\[The Edit cuboid icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/edit_cuboid.png)  |  edit cuboid  |  Choose this icon to edit a cuboid. After you have added a cuboid, you can edit its dimensions, location, and orientation. After a cuboid is added, it automatically switches to edit cuboid mode.   | 
|  ![\[The Ruler icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/Ruler_icon.png)  |  ruler  |  Use this icon to measure distances, in meters, in the point cloud. You may want to use this tool if your instructions ask you to annotate all objects within a given distance from the center of the cuboid or the object used to capture data. When you select this icon, you can place the starting point (first marker) anywhere in the point cloud by selecting it with your mouse. The tool automatically uses interpolation to place a marker on the closest point within a threshold distance of the location you select; otherwise, the marker is placed on the ground. If you place a starting point by mistake, you can use the Escape key to revert marker placement.  After you place the first marker, you see a dotted line and a dynamic label that indicates the distance you have moved away from the first marker. Click somewhere else on the point cloud to place a second marker. When you place the second marker, the dotted line becomes solid, and the distance is set.  After you set a distance, you can edit it by selecting either marker. You can delete a ruler by selecting anywhere on the ruler and using the Delete key on your keyboard.   | 
|  ![\[The Reset scene icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/fit_scene.png)  |  reset scene  |  Choose this icon to reset the view of the point cloud, side panels, and if applicable, all images to their original position when the task was first opened.   | 
|  ![\[The Move scene icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/move_scene.png)  |  move scene  |  Choose this icon to move the scene. By default, this icon is chosen when you first start a task.   | 
|  ![\[The Full screen icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/fullscreen.png)  |  full screen   |  Choose this icon to make the 3D point cloud visualization full screen, and to collapse all side panels.  | 
|  ![\[The Show labels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/show.png)  | show labels |  Show labels in the 3D point cloud visualization, and if applicable, in images.   | 
|  ![\[The Hide labels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/hide.png)  | hide labels |  Hide labels in the 3D point cloud visualization, and if applicable, in images.   | 
|  ![\[The Delete labels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/delete.png)  | delete labels |  Delete a label.   | 

## Shortcuts
<a name="sms-point-cloud-worker-instructions-od-hot-keys"></a>

The shortcuts listed in the **Shortcuts** menu can help you navigate the 3D point cloud and use tools to add and edit cuboids. 

Before you start your task, it is recommended that you review the **Shortcuts** menu and become acquainted with these commands. You need to use some of the 3D cuboid controls to edit your cuboid. 

## Release, Stop and Resume, and Decline Tasks
<a name="sms-point-cloud-worker-instructions-skip-reject-od"></a>

When you open the labeling task, three buttons on the top right allow you to decline the task (**Decline task**), release it (**Release task**), and stop and resume it at a later time (**Stop and resume later**). The following list describes what happens when you select one of these options:
+ **Decline task**: You should only decline a task if something is wrong with the task, such as an issue with the 3D point cloud, images, or the UI. If you decline a task, you will not be able to return to it.
+ **Release task**: If you release a task, you lose all work done on that task. When the task is released, other workers on your team can pick it up. If enough workers pick up the task, you may not be able to return to it. When you select this button and then select **Confirm**, you are returned to the worker portal. If the task is still available, its status will be **Available**. If other workers pick it up, it will disappear from your portal. 
+ **Stop and resume later**: You can use the **Stop and resume later** button to stop working and return to the task at a later time. You should use the **Save** button to save your work before you select **Stop and resume later**. When you select this button and then select **Confirm**, you are returned to the worker portal, and the task status is **Stopped**. You can select the same task to resume work on it. 

  Be aware that the person who creates your labeling tasks specifies a time limit within which all tasks must be completed. If you do not return to and complete this task within that time limit, it will expire and your work will not be submitted. Contact your administrator for more information. 

## Saving Your Work and Submitting
<a name="sms-point-cloud-worker-instructions-saving-work-od"></a>

You should periodically save your work. Ground Truth automatically saves your work every 15 minutes. 

When you open a task, you must complete your work on it before pressing **Submit**.

# 3D point cloud object tracking
<a name="sms-point-cloud-worker-instructions-object-tracking"></a>

Use this page to familiarize yourself with the user interface and tools available to complete your 3D point cloud object tracking task.

**Topics**
+ [Your Task](#sms-point-cloud-worker-instructions-ot-task)
+ [Navigate the UI](#sms-point-cloud-worker-instructions-worker-ui-ot)
+ [Bulk Edit Label Category and Frame Attributes](#sms-point-cloud-worker-instructions-ot-bulk-edit)
+ [Icon Guide](#sms-point-cloud-worker-instructions-ot-icons)
+ [Shortcuts](#sms-point-cloud-worker-instructions-ot-hot-keys)
+ [Release, Stop and Resume, and Decline Tasks](#sms-point-cloud-worker-instructions-skip-reject-ot)
+ [Saving Your Work and Submitting](#sms-point-cloud-worker-instructions-saving-work-ot)

## Your Task
<a name="sms-point-cloud-worker-instructions-ot-task"></a>

When you work on a 3D point cloud object tracking task, you need to select a category from the **Annotations** menu on the right side of your worker portal using the **Label Categories** menu. After you've selected a category, use the add cuboid and fit cuboid tools to fit a cuboid around objects in the 3D point cloud that this category applies to. After you place a cuboid, you can modify its location, dimensions, and orientation directly in the point cloud and in the three panels shown on the right. If you see one or more images in your worker portal, you can also modify cuboids in the images or in the 3D point cloud and the edits will show up in the other medium. 

**Important**  
If you see cuboids have already been added to the 3D point cloud frames when you open your task, adjust those cuboids and add additional cuboids as needed. 

To edit a cuboid, including moving, reorienting, and changing its dimensions, you must use shortcut keys. You can see a full list of shortcut keys in the **Shortcuts** menu in your UI. The following are important key combinations that you should become familiar with before starting your labeling task. 



| Mac Command | Windows Command | Action | 
| --- | --- | --- | 
|  Cmd + Drag  |  Ctrl + Drag  |  Modify the dimensions of the cuboid.  | 
|  Option + Drag  |  Alt + Drag  |  Move the cuboid.  | 
|  Shift + Drag  |  Shift + Drag  |  Rotate the cuboid.  | 
|  Option + O  |  Alt + O  |  Fit the cuboid tightly around the points it has been drawn around. Before using this option, make sure the cuboid fully surrounds the object of interest.  | 
|  Option + G  |  Alt + G  |  Set the cuboid to the ground.  | 

When you open your task, two frames will be loaded. If your task includes more than two frames, you need to use the navigation bar in the lower-left corner, or the load frames icon to load additional frames. You should annotate and adjust labels in all frames before submitting. 

After you fit a cuboid tightly around the boundaries of an object, navigate to another frame using the navigation bar in the lower-left corner of the UI. If that same object has moved to a new location, add another cuboid and fit it tightly around the boundaries of the object. Each time you manually add a cuboid, you see the frame sequence bar in the lower-left corner of the screen turn red where that frame is located temporally in the sequence.

Your UI automatically infers the location of that object in all other frames after you've placed a cuboid. This is called *interpolation*. You can see the movement of that object, and the inferred and manually created cuboids, using the arrows. Adjust inferred cuboids as needed. The following video demonstrates how to navigate between frames and shows how, if you add a cuboid in one frame and then adjust it in another, the UI automatically infers the location of the cuboid in all of the frames in between.

![\[Gif showing how the location of a cuboid is inferred in in-between frames.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/label-interpolation.gif)


**Tip**  
You can turn off automatic cuboid interpolation across frames using the **3D Point Cloud** menu. Select **3D Point Cloud** from the top menu, and then select **Interpolate Cuboids Across Frames**. This clears the option and stops cuboid interpolation. You can reselect the item to turn cuboid interpolation back on.   
Turning cuboid interpolation off will not impact cuboids that have already been interpolated across frames. 
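Conceptually, interpolation estimates where a cuboid sits in the frames between two manually placed anchor cuboids. The sketch below shows simple linear interpolation of a cuboid's center and heading between two anchors; it illustrates the idea only, is not Ground Truth's actual algorithm, and the field names are hypothetical.

```python
# Hypothetical sketch of linear cuboid interpolation between two
# manually placed "anchor" cuboids; field names are illustrative.

def interpolate_cuboid(anchor_a, anchor_b, frame_a, frame_b, frame):
    """Linearly interpolate a cuboid's center (x, y, z) and yaw for a
    frame that falls between two anchor frames."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return {
        key: anchor_a[key] + t * (anchor_b[key] - anchor_a[key])
        for key in anchor_a
    }

start = {"x": 0.0, "y": 0.0, "z": 1.0, "yaw": 0.0}   # anchor in frame 0
end   = {"x": 10.0, "y": 4.0, "z": 1.0, "yaw": 0.5}  # anchor in frame 10

# Frame 5 lands halfway between the two anchors.
mid = interpolate_cuboid(start, end, 0, 10, 5)
```

This also illustrates why deleting a manually placed cuboid causes other cuboids to shift: the inferred positions depend on the anchors, so removing one forces the in-between frames to be recomputed from the remaining anchors.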

Individual labels may have one or more label attributes. If a label has a label attribute associated with it, it will appear when you select the downward pointing arrow next to the label from the **Label Id** menu. Fill in required values for all label attributes. 

You may see frame attributes under the **Label Id** menu. These attributes will appear on each frame in your task. Use these attribute prompts to enter additional information about each frame. 

![\[Example frame attribute prompt.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/sms/frame-attributes.png)


## Navigate the UI
<a name="sms-point-cloud-worker-instructions-worker-ui-ot"></a>

You can navigate in the 3D scene using your keyboard and mouse. You can:
+ Double click on specific objects in the point cloud to zoom into them.
+ Use the [ and ] keys on your keyboard to zoom into and move from one label to the next. If no label is selected, when you press [ or ], the UI zooms into the first label in the **Label Id** list. 
+ Use a mouse scroller or trackpad to zoom in and out of the point cloud.
+ Use the keyboard arrow keys, or the Q, E, A, and D keys, to move up, down, left, and right. Use the W and S keys to zoom in and out. 

Once you place a cuboid in the 3D scene, a side view appears with three projected views: top, side, and back. These side views show points in and around the placed cuboid and help you refine cuboid boundaries in that area. You can zoom in and out of each of these side views using your mouse. 

The following video demonstrates movements around the 3D point cloud and in the side-view. 

![\[Gif shows how a worker can use the 3D or 2D view to adjust a cuboid.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/object_tracking/view-options-worker-ui.gif)


When you are in the worker UI, you see the following menus:
+ **Instructions** – Review these instructions before starting your task.
+ **Shortcuts** – Use this menu to view keyboard shortcuts that you can use to navigate the point cloud and use the annotation tools provided. 
+ **Label** – Use this menu to modify a cuboid. First, select a cuboid, and then choose an option from this menu. This menu includes assistive labeling tools like setting a cuboid to the ground and automatically fitting the cuboid to the object's boundaries. 
+ **View** – Use this menu to toggle different view options on and off. For example, you can use this menu to add a ground mesh to the point cloud, and to choose the projection of the point cloud.
+ **3D Point Cloud** – Use this menu to add additional attributes to the points in the point cloud, such as color and pixel intensity. Note that these options may not be available.

When you open a task, the move scene icon is on, and you can move around the point cloud using your mouse and the navigation buttons in the point cloud area of the screen. To return to the original view you see when you first opened the task, choose the reset scene icon. 

After you select the add cuboid icon, you can add cuboids to the point cloud and images (if included). You must select the move scene icon again to move to another area in the 3D point cloud or image. 

To collapse all panels on the right and make the 3D point cloud full-screen, choose the full screen icon. 

If camera images are included, you may have the following view options:
+ **C** – View the camera angle on point cloud view.
+ **F** – View the frustum, or field of view, of the camera used to capture that image on point cloud view. 
+ **P** – View the point cloud overlaid on the image.
+ **B** – View cuboids in the image. 

The following video demonstrates how to use these view options. The **F** option is used to view the field of view of the camera (the gray area), the **C** option shows the direction the camera is facing and the angle of the camera (blue lines), and the **B** option is used to view the cuboid. 

![\[Gif showing how to use various view options.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/gifs/view-options-side.gif)


### Delete Cuboids
<a name="sms-point-cloud-instructions-ot-delete"></a>

You can select a cuboid or label ID and:
+ Delete an individual cuboid in the current frame you are viewing.
+ Delete all cuboids with that label ID before or after the frame you are viewing.
+ Delete all cuboids with that label ID in all frames. 

A common use case for deleting a cuboid is when the object leaves the scene.

You can use one or more of these options to delete both manually placed and interpolated cuboids with the same label ID.
+ To delete all cuboids before or after the frame you are currently on, select the cuboid, select the **Label** menu item at the top of the UI and then select one of **Delete in previous frames** or **Delete in next frames**. Use the Shortcuts menu to see the shortcut keys you can use for these options.
+ To delete a label in all frames, select **Delete in all frames** from the **Label** menu, or use the shortcut **Shift + Delete** on your keyboard.
+ To delete an individual cuboid from a single frame, select the cuboid and either select the trashcan icon (![\[Trash can icon representing deletion or removal functionality.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/delete.png)) next to that label ID in the **Label ID** sidebar on the right or use the Delete key on your keyboard to delete that cuboid.

If you have manually placed more than one cuboid with the same label in different frames, when you delete one of the manually placed cuboids, all interpolated cuboids adjust. This adjustment happens because the UI uses manually placed cuboids as anchor points when calculating the locations of interpolated cuboids. When you remove one of these anchor points, the UI must recalculate the positions of the interpolated cuboids.

If you delete a cuboid from a frame, but later decide that you want to get it back, you can use the **Duplicate to previous frames** or **Duplicate to next frames** options in the **Label** menu to copy the cuboid into all the previous or all of the following frames, respectively.

## Bulk Edit Label Category and Frame Attributes
<a name="sms-point-cloud-worker-instructions-ot-bulk-edit"></a>

You can bulk edit label attributes and frame attributes. 

When you bulk edit an attribute, you specify one or more ranges of frames that you want to apply the edit to. The attribute you select is edited in all frames in that range, including the start and end frames you specify. When you bulk edit label attributes, the range you specify *must* contain the label that the label attribute is attached to. If you specify frames that do not contain this label, you will receive an error.

To bulk edit an attribute you *must* specify the desired value for the attribute first. For example, if you want to change an attribute from *Yes* to *No*, you must select *No*, and then perform the bulk edit. 

You can also specify a new value for an attribute that has not been filled in and then use the bulk edit feature to fill in that value in multiple frames. To do this, select the desired value for the attribute and complete the following procedure. 

**To bulk edit a label or attribute:**

1. Use your mouse to right click the attribute you want to bulk edit.

1. Specify the range of frames you want to apply the bulk edit to using a dash (`-`) in the text box. For example, if you want to apply the edit to frames one through ten, enter `1-10`. If you want to apply the edit to frames two through five, eight through ten, and twenty, enter `2-5,8-10,20`.

1. Select **Confirm**.

If you get an error message, verify that you entered a valid range and that the label associated with the label attribute you are editing (if applicable) exists in all frames specified.
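The range format described above (comma-separated single frames and dashed ranges, with both endpoints included) can be made concrete with a small parser. This is a hypothetical helper for understanding the format only; the worker UI performs this parsing for you.

```python
def parse_frame_ranges(spec):
    """Expand a bulk-edit range string like "2-5,8-10,20" into a
    sorted list of frame numbers. Dashed ranges are inclusive of
    both the start and end frames."""
    frames = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            frames.update(range(int(start), int(end) + 1))
        else:
            frames.add(int(part))
    return sorted(frames)

# "2-5,8-10,20" covers frames 2, 3, 4, 5, 8, 9, 10, and 20.
selected = parse_frame_ranges("2-5,8-10,20")
```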

You can quickly add a label to all previous or subsequent frames using the **Duplicate to previous frames** and **Duplicate to next frames** options in the **Label** menu at the top of your screen. 

## Icon Guide
<a name="sms-point-cloud-worker-instructions-ot-icons"></a>

Use this table to learn about the icons you see in your worker task portal. 


| Icon | Name | Description | 
| --- | --- | --- | 
|  ![\[The Add cuboid icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/add_cuobid.png)  |  add cuboid  |  Choose this icon to add a cuboid. Each cuboid you add is associated with the category you chose.   | 
|  ![\[The Edit cuboid icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/edit_cuboid.png)  |  edit cuboid  |  Choose this icon to edit a cuboid. After you add a cuboid, you can edit its dimensions, location, and orientation. After a cuboid is added, it automatically switches to edit cuboid mode.   | 
|  ![\[The Ruler icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/Ruler_icon.png)  |  ruler  |  Use this icon to measure distances, in meters, in the point cloud. You may want to use this tool if your instructions ask you to annotate all objects within a given distance from the center of the cuboid or the object used to capture data. When you select this icon, you can place the starting point (first marker) anywhere in the point cloud by selecting it with your mouse. The tool automatically uses interpolation to place a marker on the closest point within a threshold distance of the location you select; otherwise, the marker is placed on the ground. If you place a starting point by mistake, you can use the Escape key to revert marker placement.  After you place the first marker, you see a dotted line and a dynamic label that indicates the distance you have moved away from the first marker. Click somewhere else on the point cloud to place a second marker. When you place the second marker, the dotted line becomes solid, and the distance is set.  After you set a distance, you can edit it by selecting either marker. You can delete a ruler by selecting anywhere on the ruler and using the Delete key on your keyboard.   | 
|  ![\[The Reset scene icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/fit_scene.png)  |  reset scene  | Choose this icon to reset the view of the point cloud, side panels, and if applicable, all images to their original position when the task was first opened.  | 
|  ![\[The Move scene icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/move_scene.png)  |  move scene  |  Choose this icon to move the scene. By default, this icon is chosen when you first start a task.   | 
|  ![\[The Full screen icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/fullscreen.png)  |  full screen   |  Choose this icon to make the 3D point cloud visualization full screen and to collapse all side panels.  | 
|  ![\[The Load frames icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/load_screen.png)  |  load frames  |  Choose this icon to load additional frames.   | 
|  ![\[The Hide labels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/hide.png)  | hide labels |  Hide labels in the 3D point cloud visualization, and if applicable, in images.   | 
|  ![\[The Show labels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/show.png)  | show labels |  Show labels in the 3D point cloud visualization, and if applicable, in images.   | 
|  ![\[The Delete labels icon.\]](http://docs.aws.amazon.com/sagemaker/latest/dg/images/pointcloud/icons/label-icons/delete.png)  | delete labels |  Delete a label. This option can only be used to delete labels you have manually created or adjusted.   | 

## Shortcuts
<a name="sms-point-cloud-worker-instructions-ot-hot-keys"></a>

The shortcuts listed in the **Shortcuts** menu can help you navigate the 3D point cloud and use tools to add and edit cuboids. 

Before you start your task, it is recommended that you review the **Shortcuts** menu and become acquainted with these commands. You need to use some of the 3D cuboid controls to edit your cuboid. 

## Release, Stop and Resume, and Decline Tasks
<a name="sms-point-cloud-worker-instructions-skip-reject-ot"></a>

When you open the labeling task, three buttons on the top right allow you to decline the task (**Decline task**), release it (**Release task**), and stop and resume it at a later time (**Stop and resume later**). The following list describes what happens when you select one of these options:
+ **Decline task**: You should only decline a task if something is wrong with the task, such as an issue with the 3D point clouds, images, or the UI. If you decline a task, you will not be able to return to it.
+ **Release task**: Use this option to release a task and allow others to work on it. When you release a task, you lose all work done on that task and other workers on your team can pick it up. If enough workers pick up the task, you may not be able to return to it. When you select this button and then select **Confirm**, you are returned to the worker portal. If the task is still available, its status will be **Available**. If other workers pick it up, it will disappear from your portal. 
+ **Stop and resume later**: You can use the **Stop and resume later** button to stop working and return to the task at a later time. You should use the **Save** button to save your work before you select **Stop and resume later**. When you select this button and then select **Confirm**, you are returned to the worker portal, and the task status is **Stopped**. You can select the same task to resume work on it.

  Be aware that the person who creates your labeling tasks specifies a time limit within which all tasks must be completed. If you do not return to and complete this task within that time limit, it will expire and your work will not be submitted. Contact your administrator for more information. 

## Saving Your Work and Submitting
<a name="sms-point-cloud-worker-instructions-saving-work-ot"></a>

You should periodically save your work. Ground Truth automatically saves your work every 15 minutes. 

When you open a task, you must complete your work on it before pressing **Submit**. 