

# What is Amazon Mechanical Turk?
<a name="WhatIs"></a>

Amazon Mechanical Turk (Mechanical Turk) is a crowdsourcing marketplace that connects you with an on-demand, scalable, human workforce to complete tasks. Using Mechanical Turk, you can programmatically direct tasks to the Mechanical Turk marketplace, where they can be completed by workers around the world. Mechanical Turk allows you to access the intelligence, skills, and insights of a global workforce for tasks as varied as data categorization, moderation, data collection and analysis, behavioral studies, and image annotation.

Mechanical Turk is built around the concept of *microtasks,* which are small, atomic tasks that workers can complete in their web browser. When you submit work to Mechanical Turk, you typically start by breaking it into smaller tasks on which workers can work independently. In this way, a project involving categorizing 10,000 images becomes 10,000 individual microtasks that workers can complete. By breaking tasks down atomically, hundreds of workers can work on portions of your project at the same time, which increases how quickly the work can be completed. In addition, you can specify that each task be completed by multiple workers to allow you to check for quality or identify biases in subjective questions.

**Important**  
If you do not add a CORS configuration to the Amazon S3 buckets that contain your image input data, HITs that you create using those input images will fail. To learn more, see [CORS configuration requirement](MturkCorsConfig.md).

Use this guide to learn how you can interact with Mechanical Turk programmatically. We recommend that you begin by reading the following topics. To get started quickly with Mechanical Turk, see [Get Started with Amazon Mechanical Turk](GetStartedMturk.md).

**Topics**
+ [The Amazon Mechanical Turk marketplace](IntroMarketplace.md)
+ [Creating tasks that work well on Amazon Mechanical Turk](IntroTaskWorkWellMturk.md)
+ [Amazon Mechanical Turk core concepts](IntroCoreConcepts.md)
+ [Amazon Mechanical Turk best practices](IntroBestPractices.md)
+ [Frequently asked questions](IntroFAQ.md)

# The Amazon Mechanical Turk marketplace
<a name="IntroMarketplace"></a>

Mechanical Turk uses the *requester* and *worker* terms to describe the two participants in the marketplace. When you post new tasks to Amazon Mechanical Turk (Mechanical Turk), you are a *requester* asking *workers* to complete your tasks in exchange for the reward amount you offer. Workers can go to the [Mechanical Turk marketplace](https://worker.mturk.com) to find and accept tasks. 

As shown in the following image of the marketplace website, workers can see a list of available tasks, along with details about each task. Workers can review the title and description, reward amount, and time allotted to complete each task before accepting and working on it. In many cases, workers preview a task prior to accepting it, which allows them to decide if they want to work on it.

![\[Amazon Mechanical Turk worker interface showing available HITs with details and options to preview or accept tasks.\]](http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkRequester/images/mturk_marketplace.png)


Submitting tasks to the Mechanical Turk marketplace does not guarantee that workers will complete them. If workers don't believe that the reward amount is reasonable for the effort required, or the work isn't something they want to do, they skip it and move on to other tasks. For this reason, we recommend that you put thought into how you describe your task so that workers can make an informed decision.

After workers complete your task, they submit their response and move on to additional tasks that you've posted or tasks from other requesters. You can review a worker's submission shortly after they submit the task. You have the option to *approve* or *reject* their submission. If you approve the work, the reward amount is distributed to the worker. Note that if you neither approve nor reject a task submission, it is automatically approved after a set time. 
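Programmatically, this review step maps to the `ListAssignmentsForHIT` and `ApproveAssignment` operations. The following is a minimal sketch using the AWS SDK for Python (Boto3); the `client` object, HIT ID, and feedback text are placeholders, and a real reviewer would inspect each answer before approving.

```python
def approve_submitted_assignments(client, hit_id):
    """Fetch submitted assignments for a HIT and approve each one.

    `client` is assumed to be an MTurk client such as boto3.client("mturk").
    This sketch approves every submission for brevity; in practice you
    would examine each assignment's answers first.
    """
    approved = []
    response = client.list_assignments_for_hit(
        HITId=hit_id, AssignmentStatuses=["Submitted"]
    )
    for assignment in response["Assignments"]:
        client.approve_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="Thanks for your work!",  # placeholder feedback
        )
        approved.append(assignment["AssignmentId"])
    return approved
```

Approving an assignment triggers payment of the reward to the worker, so approval is the step that moves money.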

## Marketplace rules
<a name="IntroMarketplaceRules"></a>

Prior to submitting tasks to Mechanical Turk, you should review the [Acceptable Use Policy](https://www.mturk.com/acceptable-use-policy) to ensure that your task adheres to the rules of the marketplace. Prohibited uses cover a range of activities such as violating the privacy or security of workers or others, abusive behavior, or any illegal activities. Violating these policies results in removal of your tasks from the Mechanical Turk marketplace and may result in the suspension of your account. 

## The sandbox marketplace
<a name="IntroMarketplaceSandbox"></a>

To experiment with Mechanical Turk without spending money on the Mechanical Turk marketplace, you can use the [*sandbox* environment for requesters](https://requestersandbox.mturk.com) and [the one for workers](https://workersandbox.mturk.com). This is a mirror image of the *production* environment, but no money changes hands when work is completed. Many requesters create tasks here first and complete them themselves so that they can validate their task interface and ensure they get the results they expect back. You can find more information on using the sandbox in [Using the sandbox](mturk-use-sandbox.md).

Note that there is no financial incentive to complete work in the sandbox marketplace, so you shouldn't expect tasks you post in the sandbox to be completed unless you do so yourself. 

# Creating tasks that work well on Amazon Mechanical Turk
<a name="IntroTaskWorkWellMturk"></a>

Amazon Mechanical Turk (Mechanical Turk) can be used for an exceptionally wide range of tasks. Tasks that work well on Mechanical Turk generally meet the following criteria: 
+ Can be completed from within a web browser
+ Can be broken into distinct, bite-sized tasks
+ Can support clear instructions and outcomes

Most tasks that meet these criteria can be completed on Mechanical Turk, assuming you provide workers with a task interface that allows them to successfully perform the task. You should also keep in mind that Mechanical Turk workers excel at tasks that rely on general human knowledge and skills. While some workers have specialized experience such as legal or medical backgrounds, most do not. As a result, while Mechanical Turk can enable tasks such as labeling the location of people or animals in images, you are likely to have less success asking workers to apply expertise that would be associated with a radiologist. 

Note that tasks must also conform to the rules in the [Mechanical Turk Acceptable Use Policy](https://www.mturk.com/acceptable-use-policy). Prohibited uses cover a range of activities such as violating the privacy or security of workers or others, abusive behavior, or any illegal activities. 

## Tasks can be completed within a web browser
<a name="IntroTaskWorkWellMturkBrowser"></a>

Mechanical Turk tasks are built using HTML and presented to workers via the Mechanical Turk website. Most workers complete tasks on their computer without the need to use other devices or specialized software. Tasks that require workers to visit physical locations or leverage other devices aren't recommended. 

## Work can be broken into distinct, bite-sized tasks
<a name="IntroTaskWorkWellMturkInstructions"></a>

Most Mechanical Turk tasks take less than five minutes to complete, and almost all can be completed within an hour. This lets workers try new tasks without needing to commit a lot of time. Most workers appreciate the flexibility that Mechanical Turk provides in moving from task to task without being locked in for an extended period of time. 


## Task supports clear instructions and outcomes
<a name="IntroTaskWorkWellMturkSize"></a>

The most successful tasks on Mechanical Turk are those that provide the necessary information for a worker to imagine what a successful response would look like. Avoid tasks that are open-ended and could have multiple possible outcomes. For example, a task that asks workers to *identify all of the competitors of company X* would be frustrating for workers. By specifying that you want *all* competitors, workers are left wondering at what point they should draw a line and stop their research. It would also leave them wondering if you will reject their work if they aren't as comprehensive as you want them to be. In this example, you should instead be specific about the data that you need by describing your task as *identify the top 5 competitors of company X*. 

## Examples of common uses of Mechanical Turk
<a name="IntroTaskWorkWellMturkExamples"></a>

The following are examples of common Mechanical Turk use-cases:
+ *Audio transcription*: Transcribe an audio clip.
+ *Categorization*: Categorize products.
+ *Data collection*: Identify the website for a business. 
+ *Writing*: Write a description of a product based on an image and details. 
+ *Market research*: Complete a market research survey. 
+ *Rating*: Evaluate and rate the quality of an image. 
+ *Usability testing*: Visit a website and complete a set of steps, providing feedback on each step. 
+ *Research study*: Participate in a study by responding to questions surrounding a scenario. 
+ *Computer vision*: Draw bounding boxes around animals in images.
+ *Natural language processing*: Identify the named entities within a statement.
+ *Matching*: Review two data records and confirm they relate to the same business. 
+ *Moderation*: Evaluate a set of images and identify any that don't meet the provided criteria. 
+ *Ranking*: Rank a list of products based on their relevance to a search query. 
+ *Data extraction*: Extract the names and prices of products in a receipt. 
+ *Text transcription*: Transcribe handwritten text. 
+ *Video transcription*: Transcribe a video clip. 

# Amazon Mechanical Turk core concepts
<a name="IntroCoreConcepts"></a>

The following are the core concepts of Amazon Mechanical Turk (Mechanical Turk) that you need to understand to use it effectively. 

## Requesters and workers
<a name="IntroCoreConceptsRequesters"></a>

A *requester* is a company, organization, or person that posts tasks (HITs) to Mechanical Turk for workers to perform. A *worker* is a person who performs the tasks specified by a requester in a HIT. 

## Marketplace
<a name="IntroCoreConceptsMarketplace"></a>

 The [Mechanical Turk marketplace](https://worker.mturk.com) is where workers can go to find and accept tasks. In addition to the *production* marketplace, there is a second [*sandbox* marketplace](https://workersandbox.mturk.com) where requesters can post development tasks without money changing hands. 

For more information, see [The Amazon Mechanical Turk marketplace](IntroMarketplace.md). 

## Task or HIT
<a name="IntroCoreConceptsHits"></a>

The base unit of work in Mechanical Turk is called a *Human Intelligence Task*, which is typically designated as a *HIT* or *task*. A HIT represents a single, self-contained task, such as *Identify the color of the car in the photo*, that a requester submits to Mechanical Turk for workers to complete. 

Mechanical Turk is built around the concept of *microtasks,* which are small, atomic tasks that workers can complete in their web browser. When you submit work to Mechanical Turk, you typically start by breaking it into smaller tasks on which workers can work independently. In this way, a project involving categorizing 10,000 images becomes 10,000 individual microtasks that workers can complete. Hundreds of workers can work on portions of your project at the same time, which increases how quickly the work can be completed. In addition, you can specify that each task be completed by multiple workers to allow you to check for quality or identify biases in subjective questions. 

## Assignment
<a name="IntroCoreConceptsAssignment"></a>

When creating a HIT, you can specify how many workers can accept and complete each task. Doing so allows you to collect multiple responses for each item and then compare them. This additional information can be valuable in managing quality, as well as in collecting multiple data points when responses are subjective. 

When a worker accepts a HIT, Mechanical Turk creates an *assignment*, which belongs exclusively to the worker. The worker can submit results up until the expiration of the HIT. When retrieving results for a HIT, requesters retrieve all of the submitted assignments. 
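In the API, the number of assignments is set when the HIT is created. The following sketch uses the AWS SDK for Python (Boto3) with placeholder title, reward, and duration values; the `client` and `question_xml` arguments are assumptions, not part of this guide.

```python
def create_hit_with_three_assignments(client, question_xml):
    """Create a HIT that up to three different workers can complete.

    `client` is assumed to be an MTurk client such as boto3.client("mturk").
    MaxAssignments controls how many workers can each submit one
    assignment for this HIT.
    """
    response = client.create_hit(
        Title="Identify the color of the car in the photo",  # placeholder
        Description="Look at the photo and report the car's color.",
        Reward="0.05",                    # reward per assignment, in USD
        MaxAssignments=3,                 # up to three distinct workers
        AssignmentDurationInSeconds=600,  # time allowed after accepting
        LifetimeInSeconds=86400,          # how long the HIT stays available
        Question=question_xml,
    )
    return response["HIT"]["HITId"]
```

Each of the three workers receives a separate assignment, and you retrieve all three responses when you collect results for the HIT.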

## Reward and bonus
<a name="IntroCoreConceptsBonus"></a>

A *reward* is the money you, as a requester, pay workers for satisfactory work they do on your HITs. A *bonus* is the amount you award workers for high-quality performance. Rewards are transmitted to workers when assignment submissions are approved, either by approving the assignment or when the auto-approval threshold is reached. Bonuses can be sent to workers who have recently completed an assignment for you. 
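A bonus payment maps to the `SendBonus` operation. The following is a sketch using the AWS SDK for Python (Boto3); the amount and reason text are placeholders, and the `client` object is assumed.

```python
def send_quality_bonus(client, worker_id, assignment_id):
    """Send a small bonus to a worker for a recently completed assignment.

    `client` is assumed to be an MTurk client such as boto3.client("mturk").
    UniqueRequestToken guards against paying the same bonus twice if the
    call is retried.
    """
    client.send_bonus(
        WorkerId=worker_id,
        AssignmentId=assignment_id,
        BonusAmount="0.50",  # in USD, passed as a string; placeholder amount
        Reason="Consistently high-quality responses on this batch.",
        UniqueRequestToken=f"bonus-{assignment_id}",
    )
```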

## Qualifications
<a name="IntroCoreConceptsQuals"></a>

You can use *qualifications* to specify attributes of the workers eligible to work on your HITs. Qualifications can be either system-generated, such as qualifications based on location, or managed by you, based on past performance on your tasks. 

To learn more, see [Selecting eligible workers](SelectingEligibleWorkers.md).

# Amazon Mechanical Turk best practices
<a name="IntroBestPractices"></a>

Keep the following best practices in mind when you design and create your HITs.

## Allow workers to be as efficient as possible
<a name="iIntroBestPracticesEfficient"></a>

When you post tasks to Mechanical Turk, the reward amount you set is primarily for the worker's time and attention to your task. If your task interface is inefficient and requires multiple manual steps that require a lot of time, workers typically expect a higher reward amount to compensate for the time they need to spend performing those steps. Investing time to make your interface as efficient as possible pays dividends in higher accuracy and lower costs.

## Build tasks with family and friends in mind
<a name="iIntroBestPracticesFriendlyTasks"></a>

When building tasks, it’s a common mistake to assume that workers have the same knowledge you do about your area of expertise. Very few workers have the expertise you do and will likely be confused if you use highly technical language or make assumptions about their skills. A great practice is to design your task interface with a member of your family or a friend in mind. Could they complete your task successfully? If you're not sure, share the interface with them and see if they can complete it without any additional instructions from you.

## Include an optional feedback field
<a name="iIntroBestPracticesFeedback"></a>

Whenever possible, include an optional feedback field at the end of your task interface, particularly when working with a new interface. Workers appreciate the opportunity to provide feedback and often share insights on how to improve it.

## Test your HITs
<a name="IntroBestPracticesTest"></a>

Before posting your tasks to Mechanical Turk, it is always a good idea to take a few minutes to test your HITs. Doing so validates that your interface works as you expect. Doing the task yourself also gives you an idea of how long it takes to complete so that you can set an appropriate reward amount.

The easiest way to test your task interface is to save it to an HTML file and open it in a browser. From the browser, you can go through all of the steps that a worker would follow in completing the task. If your task interface is built around a standard form element, you can't test submitting it, but you can test to ensure it works as you expect. If you use the crowd-form element from [Crowd HTML Elements](mturk-hits-defining-questions-html-crowd-html-elements.md), you can test it by selecting **Submit**. When you submit from outside of Mechanical Turk, the results are displayed at the top of the window.

To fully test a task interface and the creation and retrieval of HITs, you can use the sandbox environment.

## Start small
<a name="IntroBestPracticesStartSmall"></a>

When you create or update a task interface, it's always best to start by posting a small number of HITs first to confirm that workers complete the task as you expect. It's a great way to understand how workers respond and gives you a chance to correct any issues before you post the remaining work. Nothing is worse than posting thousands of dollars of HITs, only to discover that the results are invalid because you made a mistake in your task interface.

## Keep HIT type attributes consistent
<a name="IntroBestPracticesAttributes"></a>

When you create a HIT, you provide a number of attributes about the task that tell Mechanical Turk how to display it in the marketplace. These are separate from the content and question of the task itself, and include the title, description, reward amount, and attributes describing how long the task remains active. These attributes comprise the HIT type for your task. Mechanical Turk automatically creates a HIT type when you first call [CreateHIT](https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_CreateHITOperation.html) with those values. When you create multiple HITs, Mechanical Turk attempts to find an existing HIT type in your account that has the same attributes and reuse it. If you change any of these attributes—even if they are small changes to the title or description—it will force Mechanical Turk to create a new HIT type with each change. 
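One way to keep these attributes consistent, sketched below with the AWS SDK for Python (Boto3), is to create the HIT type once and reuse its ID for every HIT in the batch. The attribute values and the `client` and `question_xmls` arguments are placeholders, not part of this guide.

```python
def post_batch(client, question_xmls):
    """Create one HIT type, then post every HIT in the batch against it.

    Because all HITs share the same HITTypeId, they appear as a single
    HIT group on the worker website. `client` is assumed to be an MTurk
    client such as boto3.client("mturk").
    """
    hit_type = client.create_hit_type(
        Title="Categorize a product image",  # placeholder attributes
        Description="Choose the category that best fits the product shown.",
        Reward="0.03",
        AssignmentDurationInSeconds=300,
        AutoApprovalDelayInSeconds=259200,   # auto-approve after 3 days
        Keywords="image, categorization",
    )
    hit_ids = []
    for question in question_xmls:
        hit = client.create_hit_with_hit_type(
            HITTypeId=hit_type["HITTypeId"],  # same HIT type for every HIT
            MaxAssignments=1,
            LifetimeInSeconds=86400,
            Question=question,
        )
        hit_ids.append(hit["HIT"]["HITId"])
    return hit_ids
```

Creating the HIT type explicitly, rather than letting `CreateHIT` derive one from its arguments, makes it harder to accidentally fragment a batch across multiple HIT groups with a stray edit to the title or description.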

Maintaining consistent attributes for your HIT type is important because it directly impacts how your HIT is displayed on the worker website. On the worker website, HITs are grouped together into HIT groups based on their HIT type values. As shown in the following image, each HIT group has thousands of HITs on which a worker can work because they all have the same attributes for title, description, reward, and other attributes. If workers accept a HIT from one of these HIT groups, they can automatically move to the next piece of work in the HIT group without needing to return to the list. 

![\[HIT Groups interface showing tasks with titles, rewards, and time allotted.\]](http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkRequester/images/mturk_accepted_HIT.png)


If, however, each HIT has a unique HIT type, then workers see your HITs as a long list of options in the list and have to return to the list after completing each task.

![\[Table showing HITs with identical titles, rewards, and creation dates for ad tagging tasks.\]](http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkRequester/images/mturk_ad_tagger.png)


## Specify that links open new browser windows or tabs
<a name="IntroBestPracticesLinks"></a>

When you add links to your task HTML, you should include a *target* attribute to let the browser know that it should open a new window or tab when workers select the link. This keeps the worker interface active in the existing window and prevents issues that sometimes occur when workers use the **Back** button to return to the worker interface. Add the `_blank` target value to direct the browser to open a new window, as shown in the following example.

```html
<a href="https://www.amazon.com" target="_blank">My link</a>
```

## Limit your use of worker blocks
<a name="IntroBestPracticesBlocks"></a>

We recommend that you be judicious in your use of worker blocks and only block those workers who are clearly not making an attempt to correctly respond to your task (spamming). If a worker is simply misreading instructions or lacks the requisite skills to complete your task successfully, we advise you to use a custom qualification to exclude them from future tasks, rather than a block. Because the blocks a worker receives are a component of Mechanical Turk worker review policies, and frequent blocks may result in account suspension, workers are sensitive to being blocked by requesters. If the worker community believes that you are blocking workers unfairly, they may choose to avoid accepting your tasks in the future.

## Include clear reasons for rejections and blocks
<a name="IntroBestPracticesRejection"></a>

Workers take a lot of pride in the quality of their work and pay close attention to rejections and blocks they receive. When you decide to reject an assignment or block a worker, be as clear as possible about the reasons for the action. Simply providing a value such as *incorrect* as the reason gives the worker no information they can use to improve in the future. Instead, be clear about what the worker did incorrectly. This allows workers to correct their mistakes in future tasks. 

# Frequently asked questions
<a name="IntroFAQ"></a>

Use the following sections to get answers to frequently asked questions. If you need additional support, use the following link to contact Amazon Mechanical Turk: [www.mturk.com/contact-us](http://www.mturk.com/contact-us).

## Why aren't my tasks being completed?
<a name="intro-faq-task-completion"></a>

There are a number of reasons why the tasks you post to Mechanical Turk aren't being completed. The most common reason is that the reward amount you specified isn't adequate to compensate workers for the time and effort they need to commit to your task to complete it. If you suspect this is the case, remove the HITs from Mechanical Turk by expiring them and experiment with reposting some of them at a higher reward amount. 

Other common reasons include the following. 
+ The qualification requirements for the task are so narrow that few, if any, workers meet the criteria to be eligible for the task. 
+ The task interface has a technical issue that prevents workers from submitting it. 
+ The assignment duration is set too short for workers to successfully complete the task in the time allowed. 

## How do I pull down HITs I created by mistake?
<a name="IntroFAQRemoveHits"></a>

Use the [UpdateExpirationForHIT](https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_UpdateExpirationForHITOperation.html) operation and set the `ExpireAt` time to `0` to tell Mechanical Turk to immediately expire a HIT. Note that this won't prevent workers that have already accepted your HIT from completing and submitting it. 
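As a sketch of that call using the AWS SDK for Python (Boto3), where the `client` object and HIT ID are assumptions: Boto3 expects a datetime for `ExpireAt`, and any time in the past has the same immediate-expiry effect as the `0` described above.

```python
from datetime import datetime, timezone

def expire_hit_now(client, hit_id):
    """Expire a HIT immediately so no new workers can accept it.

    `client` is assumed to be an MTurk client such as boto3.client("mturk").
    Workers who already accepted the HIT can still submit their assignments.
    """
    client.update_expiration_for_hit(
        HITId=hit_id,
        ExpireAt=datetime(2015, 1, 1, tzinfo=timezone.utc),  # any past time
    )
```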

## I expired my HITs. Why am I still getting submissions from workers?
<a name="IntroFAQHitsExpired"></a>

If a worker accepts a HIT before it expires, they are still allowed to complete and submit the task until the assignment duration elapses. This protects the worker experience by letting them submit work on which they may have already spent a lot of time, even if you opt to take the HIT down. 

## Why are some of my task fields missing from my results?
<a name="IntroFAQFields"></a>

A common mistake in building task interfaces is using the same *name* attribute for multiple form inputs. In those cases, only one of the input field values is returned. You should check your HTML to ensure that each input has a unique name. 

## Can I make some fields in my task interface required?
<a name="IntroFAQFieldsRequired"></a>

You can use HTML, JavaScript, or both to specify required fields and minimum or maximum values or perform other validations that prevent workers from submitting the task if it doesn't meet the requirements. To learn more about the types of form validation you can apply, see [Client-side form validation](https://developer.mozilla.org/en-US/docs/Learn/Forms/Form_validation) on the Mozilla developer site. 

## How can I test my task interface?
<a name="IntroFAQTestUI"></a>

The easiest way to test your task interface is to save it to an HTML file and open it in a browser. From the browser, you can go through all of the steps that a worker would perform in completing the task. If your task interface is built around a standard form element, you can't test submitting it, but you can test to ensure it works as you expect. If you use the crowd-form element from Crowd HTML Elements, you can test it by selecting **Submit**. When you submit from outside of Mechanical Turk, the results are displayed at the top of the window. 

To fully test a task interface and the creation and retrieval of HITs, you can use the sandbox environment. 

## What is the difference between a HIT and an assignment?
<a name="IntroFAQAssignmentDiff"></a>

A *HIT* is a single task that you create in Mechanical Turk. When workers accept a HIT, they get an *assignment* that gives them the right to submit their response. When you create a HIT, you can specify the maximum number of assignments that can be created for each HIT, which allows you to get multiple different worker responses for each task. For more information on HITs and assignments, see [Amazon Mechanical Turk core concepts](IntroCoreConcepts.md).

## Can I view the HITs I create with the API in the requester website?
<a name="IntroFAQViewHits"></a>

No, the requester website only displays HITs that are created from the requester website. 

## I published HITs in the sandbox environment. Why aren't they being completed?
<a name="IntroFAQSandbox"></a>

The sandbox environment is a great way to test HITs without spending any money. However, because no money changes hands, there isn't any incentive for workers to complete your tasks. To complete your testing, create an account in the [worker sandbox environment](https://workersandbox.mturk.com) to complete the tasks yourself. Then, publish them in the *production* environment. 

## I incorrectly rejected some assignments. Can I reverse the rejection?
<a name="IntroFAQRevertReject"></a>

In the event that you reject an assignment but then discover that the issue was not the worker's fault, you can call [ApproveAssignment](https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_ApproveAssignmentOperation.html) to reverse the rejection, but only for assignments submitted in the last 30 days that haven't been deleted. 
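Sketched with the AWS SDK for Python (Boto3), the reversal uses the `OverrideRejection` flag; the `client` object and feedback text below are placeholders.

```python
def reverse_rejection(client, assignment_id):
    """Approve a previously rejected assignment, reversing the rejection.

    `client` is assumed to be an MTurk client such as boto3.client("mturk").
    """
    client.approve_assignment(
        AssignmentId=assignment_id,
        RequesterFeedback="Sorry, the rejection was our mistake. Thank you!",
        OverrideRejection=True,  # required when the assignment was rejected
    )
```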

## How do I filter the workers eligible to work on my task?
<a name="IntroFAQFilterWorkers"></a>

Mechanical Turk provides a qualifications system that allows you to use system-managed or custom criteria to limit the workers that can work on a task. For more information, see [Selecting eligible workers](SelectingEligibleWorkers.md). 

## How do I create a custom qualification?
<a name="IntroFAQCustomQual"></a>

You can create custom qualification types that allow you to filter workers eligible to work on your tasks using criteria based on their past performance on your tasks. For more information, see [Working with custom qualification types](WorkWithCustomQualType.md). 

## Can I restrict how many HITs a worker can complete for my project?
<a name="IntroFAQRestrictWorkers"></a>

Mechanical Turk doesn't provide a native capability to limit the number of HITs that a worker can contribute to a project or batch. To learn how to accomplish this using custom qualification types, see [Working with custom qualification types](WorkWithCustomQualType.md). Before starting your project or batch, create a custom qualification type with a label such as **Completed Enough of Project A** and specify that this type doesn't exist (**DoesNotExist**) in your qualification requirements for each HIT. When a worker reaches your threshold of submitted HITs, assign this qualification type to them, after which they can't accept any more HITs for the project. 
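The steps above can be sketched with the AWS SDK for Python (Boto3) as follows; the qualification name, description, and `client` object are placeholder assumptions.

```python
def create_cap_qualification(client):
    """Create the qualification type used to exclude workers who hit the cap.

    `client` is assumed to be an MTurk client such as boto3.client("mturk").
    """
    qual = client.create_qualification_type(
        Name="Completed Enough of Project A",  # placeholder label
        Description="Assigned once a worker reaches the per-worker HIT limit.",
        QualificationTypeStatus="Active",
    )
    return qual["QualificationType"]["QualificationTypeId"]

def cap_requirement(qualification_type_id):
    """Build the DoesNotExist requirement to attach to each HIT in the project."""
    return [{
        "QualificationTypeId": qualification_type_id,
        "Comparator": "DoesNotExist",
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }]

def cap_worker(client, qualification_type_id, worker_id):
    """Assign the qualification to a worker who reached the threshold."""
    client.associate_qualification_with_worker(
        QualificationTypeId=qualification_type_id,
        WorkerId=worker_id,
        SendNotification=False,
    )
```

Pass the list returned by `cap_requirement` as the `QualificationRequirements` parameter when you create each HIT in the project.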

## Can I post HITs in languages other than English?
<a name="IntroFAQLanguageHits"></a>

You can post HITs using any language, provided you note the required language in the title of your task. However, the number of available workers who are fluent in a given language varies greatly. It may take longer for your task to be completed or you may need to increase your reward amount if not enough workers are available. 

## Additional Mechanical Turk Resources
<a name="IntroFAQAdditionalResources"></a>
+ The [Mechanical Turk API Reference](https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/index.html) describes all the API operations for Mechanical Turk in detail.
+ The [Mechanical Turk Requester User Interface Documentation](https://docs.aws.amazon.com/AWSMechTurk/latest/RequesterUI/index.html) describes how to create Mechanical Turk tasks using a graphical user interface. 
+ Posts on the [Mechanical Turk Happenings Blog](https://blog.mturk.com/) address updates to the Mechanical Turk marketplace. 
+ [Blog Tutorials](https://blog.mturk.com/tutorials/home) provide instruction on using Mechanical Turk for a variety of tasks. 
+ The [Amazon Mechanical Turk Developer Forums](https://developer.amazonwebservices.com/connect/forum.jspa?forumID=11) provide questions and answers about Mechanical Turk. 
+ [Mechanical Turk on Github](https://github.com/awslabs/mturk-api-samples) offers sample code and tutorials.