

# Identifying Your Amazon Textract Use Case

Amazon Textract offers a variety of operations that apply to different documents. Below is a list of the operations you can perform with Amazon Textract and links to further information on each use case.
+ Detecting text only. For more information, see [Detecting Text](how-it-works-detecting.md).
+ Detecting and analyzing relationships between text. For more information, see [Analyzing Documents](how-it-works-analyzing.md).
+ Detecting and analyzing text in invoices and receipts. For more information, see [Analyzing Invoices and Receipts](invoices-receipts.md).
+ Detecting and analyzing text in government identity documents. For more information, see [Analyzing Identity Documents](how-it-works-identity.md).
+ Detecting and analyzing text in lending documents. For more information, see [Analyzing Lending Documents](lending-document-classification-extraction.md).

Amazon Textract provides you with synchronous operations for processing single-page documents with near real-time responses. For more information, see [Processing Documents Synchronously](sync.md). Amazon Textract also provides asynchronous operations that you can use to process larger, multipage documents. Asynchronous responses aren't in real time. For more information, see [Processing Documents Asynchronously](async.md). 

Amazon Textract provides you with a workflow to automatically classify lending document pages and route them to existing solutions. For more information, see [Analyzing Lending Documents](lending-document-classification-extraction.md).

 Amazon Textract lets you customize the output of its pretrained Queries feature. With Amazon Textract Custom Queries, you can use your own documents and train an adapter to customize the base model, keeping complete control over your proprietary documents. See [Customizing your Queries Responses](textract-using-adapters.md) for more information. 

 For information regarding the results returned by Analyze Lending, see [Analyze Lending Response Objects](lending-response-objects.md).

**Topics**
+ [Detecting Text](how-it-works-detecting.md)
+ [Analyzing Documents](how-it-works-analyzing.md)
+ [Analyzing Invoices and Receipts](invoices-receipts.md)
+ [Analyzing Identity Documents](how-it-works-identity.md)
+ [Analyzing Lending Documents](lending-document-classification-extraction.md)
+ [Customizing Outputs](how-it-works-custom-queries.md)

# Detecting Text


Amazon Textract provides synchronous and asynchronous operations that return only the text detected in a document. For both sets of operations, the following information is returned in multiple [Block](API_Block.md) objects:
+ The lines and words of detected text
+ The relationships between the lines and words of detected text
+ The page that the detected text appears on
+ The location of the lines and words of text on the document page

For more information, see [Lines and Words of Text](how-it-works-lines-words.md).

To detect text synchronously, use the [DetectDocumentText](API_DetectDocumentText.md) API operation, and pass a document file as input. The entire set of results is returned by the operation. For more information and an example, see [Processing Documents Synchronously](sync.md). 

**Note**  
The Amazon Rekognition API operation `DetectText` is different from `DetectDocumentText`. You use `DetectText` to detect text in live scenes, such as posters or road signs.
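As a sketch of the synchronous flow, the following uses the AWS SDK for Python (Boto3). The helper names and the synthetic `sample` list are illustrative, not part of the Textract API, and the live call assumes configured AWS credentials:

```python
def detect_lines(image_bytes):
    """Sketch: call DetectDocumentText on an image supplied as raw bytes."""
    import boto3  # assumes AWS credentials and a default region are configured
    client = boto3.client("textract")
    response = client.detect_document_text(Document={"Bytes": image_bytes})
    return response["Blocks"]

def lines_of_text(blocks):
    """Pull the text of each LINE block out of the response."""
    return [b["Text"] for b in blocks if b["BlockType"] == "LINE"]

# A synthetic Blocks list shaped like a DetectDocumentText response:
sample = [
    {"BlockType": "PAGE"},
    {"BlockType": "LINE", "Text": "Name: Jane Doe"},
    {"BlockType": "WORD", "Text": "Name:"},
    {"BlockType": "WORD", "Text": "Jane"},
    {"BlockType": "LINE", "Text": "Address: 123 Any Street"},
]
print(lines_of_text(sample))  # ['Name: Jane Doe', 'Address: 123 Any Street']
```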

To detect text asynchronously, use [StartDocumentTextDetection](API_StartDocumentTextDetection.md) to start processing an input document file. To get the results, call [GetDocumentTextDetection](API_GetDocumentTextDetection.md). The results are returned in one or more responses from `GetDocumentTextDetection`. For more information and an example, see [Processing Documents Asynchronously](async.md). 
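The asynchronous start-then-poll pattern can be sketched as follows; the bucket and key are placeholders, the polling interval is arbitrary, and AWS credentials are assumed:

```python
import time

def detect_text_async(bucket, key):
    """Sketch: start text detection on an S3 document and poll for the result."""
    import boto3  # assumes AWS credentials are configured
    client = boto3.client("textract")
    job_id = client.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )["JobId"]
    while True:
        result = client.get_document_text_detection(JobId=job_id)
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)
    # Large documents span several responses; follow NextToken to collect all.
    blocks = list(result["Blocks"])
    while "NextToken" in result:
        result = client.get_document_text_detection(
            JobId=job_id, NextToken=result["NextToken"])
        blocks.extend(result["Blocks"])
    return blocks

def count_pages(blocks):
    """Count PAGE blocks in the combined result set."""
    return sum(1 for b in blocks if b["BlockType"] == "PAGE")

print(count_pages([{"BlockType": "PAGE"}, {"BlockType": "LINE", "Text": "Hi"},
                   {"BlockType": "PAGE"}]))  # 2
```

In production you would typically subscribe an Amazon SNS topic to job-completion notifications instead of polling in a loop.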

# Analyzing Documents


Amazon Textract analyzes documents and forms for relationships among detected text. Amazon Textract analysis operations return five categories of document extraction: text, forms, tables, query responses, and signatures. The analysis of invoices and receipts is handled through a different process; for more information, see [Analyzing Invoices and Receipts](invoices-receipts.md).

**Text Extraction**  
The raw text extracted from a document. For more information, see [Lines and words of text](how-it-works-lines-words.md).

**Form Extraction**  
Form data is linked to text items extracted from a document. Amazon Textract represents form data as key-value pairs. 

In the following example, one of the lines of text detected by Amazon Textract is *Name: Jane Doe*. Amazon Textract also identifies a key (*Name:*) and a value (*Jane Doe*). For more information, see [Form data (Key-value pairs)](how-it-works-kvp.md).

*Name: Jane Doe*

*Address: 123 Any Street, Anytown, USA*

*Birth date: 12-26-1980*

Key-value pairs are also used to represent check boxes or option buttons (radio buttons) that are extracted from forms.

*Male:* ☑

For more information, see [Selection elements](how-it-works-selectables.md).
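In the AnalyzeDocument response, each key and each value arrives as a `KEY_VALUE_SET` block: a KEY block points to its VALUE block through a `VALUE` relationship, and each points to its own WORD blocks through `CHILD` relationships. A simplified parser, run here over a hand-built response fragment rather than a real API response, might look like:

```python
def block_text(block, blocks_by_id):
    """Concatenate the WORD children of a block into a string."""
    words = []
    for rel in block.get("Relationships", []):
        if rel["Type"] == "CHILD":
            for cid in rel["Ids"]:
                child = blocks_by_id[cid]
                if child["BlockType"] == "WORD":
                    words.append(child["Text"])
    return " ".join(words)

def key_value_pairs(blocks):
    """Map each detected form key to its value text."""
    by_id = {b["Id"]: b for b in blocks}
    pairs = {}
    for b in blocks:
        if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
            key_text = block_text(b, by_id)
            for rel in b.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    for vid in rel["Ids"]:
                        pairs[key_text] = block_text(by_id[vid], by_id)
    return pairs

# Hand-built fragment: the key "Name:" linked to the value "Jane Doe".
sample = [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w2", "w3"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
    {"Id": "w2", "BlockType": "WORD", "Text": "Jane"},
    {"Id": "w3", "BlockType": "WORD", "Text": "Doe"},
]
print(key_value_pairs(sample))  # {'Name:': 'Jane Doe'}
```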

**Table Extraction**  
Amazon Textract can extract tables, table cells, the items within table cells, table titles and footers, and the type of table. Amazon Textract can also return the results in JSON, CSV, or TXT format.


| Name | Address | 
| --- | --- | 
|  Ana Carolina  |  123 Any Town  | 

For more information, see [Tables](how-it-works-tables.md). Selection elements can also be extracted from tables. For more information, see [Selection elements](how-it-works-selectables.md).
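Each table comes back as a `TABLE` block whose `CELL` blocks carry `RowIndex` and `ColumnIndex` fields, so a response can be rearranged into rows. In the sketch below the cell text is inlined for brevity; a real response references WORD blocks through `CHILD` relationships instead of carrying a `Text` field on the cell:

```python
def table_rows(blocks):
    """Arrange CELL blocks into rows using their Row/ColumnIndex fields."""
    cells = [b for b in blocks if b["BlockType"] == "CELL"]
    rows = {}
    for c in cells:
        rows.setdefault(c["RowIndex"], {})[c["ColumnIndex"]] = c.get("Text", "")
    # Sort by row, then by column, to recover the reading order of the table.
    return [[row[i] for i in sorted(row)] for _, row in sorted(rows.items())]

# Synthetic cells matching the Name/Address table above:
sample = [
    {"BlockType": "TABLE"},
    {"BlockType": "CELL", "RowIndex": 1, "ColumnIndex": 1, "Text": "Name"},
    {"BlockType": "CELL", "RowIndex": 1, "ColumnIndex": 2, "Text": "Address"},
    {"BlockType": "CELL", "RowIndex": 2, "ColumnIndex": 1, "Text": "Ana Carolina"},
    {"BlockType": "CELL", "RowIndex": 2, "ColumnIndex": 2, "Text": "123 Any Town"},
]
print(table_rows(sample))
```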

**Signatures in Document Analysis**  
Amazon Textract can detect the locations of signatures in text documents. These are returned as geometry objects with bounding boxes that provide the location of a signature on the page, alongside the confidence that a signature is in that location. If the signature feature is used by itself, Amazon Textract will return both signatures and standard text detection results. Signature detection can be used in conjunction with other feature types such as forms, tables, and queries. When using it with forms and tables, signatures can be detected as part of a key-value pair or within a table cell respectively.

**Queries in Document Analysis**  
When processing a document with Amazon Textract, you may add queries to your analysis to specify what information you need. This involves passing a question, such as "What is the customer's social security number?" to Amazon Textract. Amazon Textract will then find the information in the document for that question and return it in a response structure separate from the rest of the document's information. For more information about this response structure, see [Query Response Structures](queryresponse.md). For more information on best practices for query use, see [Best Practices for Queries](bestqueries.md). Queries can be processed alone, or in combination with any other `FeatureType`, such as Tables or Forms.

 Example Query: What is the customer’s SSN?

 Example Answer: 111-xx-333
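In the response, each question appears as a `QUERY` block linked through an `ANSWER` relationship to a `QUERY_RESULT` block. A sketch of submitting queries and pairing questions with answers follows; the helper names and the synthetic fragment are illustrative, and the live call assumes AWS credentials:

```python
def ask_queries(document_bytes, questions):
    """Sketch: run AnalyzeDocument with the QUERIES feature type."""
    import boto3  # assumes AWS credentials are configured
    client = boto3.client("textract")
    return client.analyze_document(
        Document={"Bytes": document_bytes},
        FeatureTypes=["QUERIES"],
        QueriesConfig={"Queries": [{"Text": q} for q in questions]},
    )

def query_answers(blocks):
    """Pair each QUERY block with the text of its QUERY_RESULT block."""
    by_id = {b["Id"]: b for b in blocks}
    answers = {}
    for b in blocks:
        if b["BlockType"] == "QUERY":
            for rel in b.get("Relationships", []):
                if rel["Type"] == "ANSWER":
                    for rid in rel["Ids"]:
                        answers[b["Query"]["Text"]] = by_id[rid]["Text"]
    return answers

# Synthetic fragment matching the example query above:
sample = [
    {"Id": "q1", "BlockType": "QUERY",
     "Query": {"Text": "What is the customer's SSN?"},
     "Relationships": [{"Type": "ANSWER", "Ids": ["a1"]}]},
    {"Id": "a1", "BlockType": "QUERY_RESULT", "Text": "111-xx-333"},
]
print(query_answers(sample))
```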

For analyzed items, Amazon Textract returns the following in multiple [Block](API_Block.md) objects:
+ The lines and words of detected text
+ The content of detected items
+ The relationship between detected items
+ The page that the item was detected on
+ The location of the item on the document page

**Custom Queries**  
With Amazon Textract document analysis, you can customize the model output through adapters trained on your own documents. Adapters are components that plug in to the Amazon Textract pretrained deep learning model, customizing its output for your business-specific documents. You create an adapter for your specific use case by annotating (labeling) your sample documents and training the adapter on the annotated samples.

After you create an adapter, Amazon Textract provides you with an AdapterId. You can have multiple adapter versions within a single adapter. You can provide the AdapterId, along with an AdapterVersion, to an operation to specify that you want to use the adapter that you created. For example, you provide the two parameters to the [AnalyzeDocument](API_AnalyzeDocument.md) API for synchronous document analysis, or the [StartDocumentAnalysis](API_StartDocumentAnalysis.md) operation for asynchronous analysis. Providing the AdapterId as part of the request will automatically integrate the adapter into the analysis process and use it to enhance predictions for your documents. This way, you can leverage the capabilities of AnalyzeDocument while customizing the model to fit your own use case. 
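A hedged sketch of passing the two parameters to a synchronous call, using the AWS SDK for Python (Boto3); the bucket, key, adapter identifiers, and query text below are all placeholders:

```python
def analyze_with_adapter(bucket, key, adapter_id, adapter_version):
    """Sketch: attach a trained adapter to an AnalyzeDocument request.
    All argument values are placeholders; AWS credentials are assumed."""
    import boto3
    client = boto3.client("textract")
    return client.analyze_document(
        Document={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["QUERIES"],
        QueriesConfig={"Queries": [{"Text": "What is the policy number?"}]},
        AdaptersConfig={"Adapters": [{
            "AdapterId": adapter_id,
            "Version": adapter_version,
            "Pages": ["1"],  # apply the adapter to page 1 only
        }]},
    )
```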

For more information on creating and using adapters, see [Customizing your Queries Responses](textract-using-adapters.md). For a tutorial on how to create, train, and use adapters with the AWS Management Console, see [Custom Queries tutorial](textract-adapters-tutorial.md).

**Layout in Document Analysis**  
Amazon Textract can be used to detect the layout of a document by finding the locations of different elements and their associated lines of text. These elements are paragraphs, lists, headers, footers, page numbers, figures, tables, titles, and section headers. When analyzing the layout of a document, Amazon Textract returns a bounding box location of the layout elements as well as the text in those elements. This information is returned in the implied reading order of the document, listing elements from top to bottom, left to right.

You can use synchronous or asynchronous operations to analyze text in a document. To analyze text synchronously, use the [AnalyzeDocument](API_AnalyzeDocument.md) operation, and pass a document as input. `AnalyzeDocument` returns the entire set of results. For more information, see [Analyzing Document Text with Amazon Textract](analyzing-document-text.md). 

To detect text asynchronously, use [StartDocumentAnalysis](API_StartDocumentAnalysis.md) to start processing. To get the results, call [GetDocumentAnalysis](API_GetDocumentAnalysis.md). The results are returned in one or more responses from `GetDocumentAnalysis`. For more information and an example, see [Detecting or Analyzing Text in a Multipage Document](async-analyzing-with-sqs.md). 

To specify which types of analysis to perform, use the `FeatureTypes` list input parameter. Add `TABLES` to the list to return information about the tables that are detected in the input document, such as table cells, cell text, and selection elements in cells. Add `FORMS` to return word relationships, such as key-value pairs and selection elements. Add `QUERIES` to specify information you want Amazon Textract to look for in the document and get a response back in the form of a question-answer pair. Add `LAYOUT` to determine the layout of the document. To perform all types of analysis, add `TABLES`, `FORMS`, `QUERIES`, and `LAYOUT` to `FeatureTypes`.

All lines and words that are detected in the document are included in the response (including text not related to the value of `FeatureTypes`).

# Analyzing Invoices and Receipts


Amazon Textract extracts relevant data, such as vendor and receiver contact information, from almost any invoice or receipt without the need for templates or configuration. Invoices and receipts often use various layouts, making it difficult and time-consuming to manually extract data at scale. Amazon Textract uses machine learning to understand the context of invoices and receipts. It automatically extracts data such as the invoice or receipt date, invoice or receipt number, item prices, total amount, and payment terms.

Amazon Textract also identifies vendor names that are critical for your workflows but may not be explicitly labeled. For example, Amazon Textract can find the vendor name on a receipt even if it's only indicated within a logo at the top of the page without an explicit key-value pair combination. 

Amazon Textract also makes it easy for you to consolidate input from diverse receipts and invoices that use different words for the same concept. For example, Amazon Textract maps relationships between field names in different documents, such as bill number, invoice number, and receipt number, outputting them under the standard taxonomy `INVOICE_RECEIPT_ID`. This way, Amazon Textract represents data consistently across different document types. Address fields are categorized as 'receiver', 'supplier', 'vendor', 'bill to', 'ship to', and 'remit to'. When expense documents do not have unique values for each of these categories, Amazon Textract returns only the categories with unique values.

 Fields that do not align with the standard taxonomy are categorized as `OTHER`. 

Following is a list of standard fields supported by expense analysis operations.

## List of Expense Analysis Standard Fields

+ Invoice Receipt Date — `INVOICE_RECEIPT_DATE`
+ Invoice Receipt ID — `INVOICE_RECEIPT_ID`
+ Invoice Tax Payer ID — `TAX_PAYER_ID`
+ Customer Number — `CUSTOMER_NUMBER`
+ Account Number — `ACCOUNT_NUMBER`
+ Vendor Name — `VENDOR_NAME`
+ Receiver Name — `RECEIVER_NAME`
+ Vendor Address — `VENDOR_ADDRESS`
+ Receiver Address — `RECEIVER_ADDRESS`
+ Order Date — `ORDER_DATE`
+ Due Date — `DUE_DATE`
+ Delivery Date — `DELIVERY_DATE`
+ PO Number — `PO_NUMBER`
+ Payment Terms — `PAYMENT_TERMS`
+ Total — `TOTAL`
+ Amount Due — `AMOUNT_DUE`
+ Amount Paid — `AMOUNT_PAID`
+ Subtotal — `SUBTOTAL`
+ Tax — `TAX`
+ Service Charge — `SERVICE_CHARGE`
+ Gratuity — `GRATUITY`
+ Prior Balance — `PRIOR_BALANCE`
+ Discount — `DISCOUNT`
+ Shipping and Handling Charge — `SHIPPING_HANDLING_CHARGE`
+ Vendor ABN Number — `VENDOR_ABN_NUMBER`
+ Vendor GST Number — `VENDOR_GST_NUMBER`
+ Vendor PAN Number — `VENDOR_PAN_NUMBER`
+ Vendor VAT Number — `VENDOR_VAT_NUMBER`
+ Receiver ABN Number — `RECEIVER_ABN_NUMBER`
+ Receiver GST Number — `RECEIVER_GST_NUMBER`
+ Receiver PAN Number — `RECEIVER_PAN_NUMBER`
+ Receiver VAT Number — `RECEIVER_VAT_NUMBER`
+ Vendor Phone — `VENDOR_PHONE`
+ Receiver Phone — `RECEIVER_PHONE`
+ Vendor URL — `VENDOR_URL`
+ Line Item/Item Description — `ITEM`
+ Line Item/Quantity — `QUANTITY`
+ Line Item/Total Price — `PRICE`
+ Line Item/Unit Price — `UNIT_PRICE`
+ Line Item/ProductCode — `PRODUCT_CODE`
+ Address (Bill To, Ship To, Remit To, Supplier) — `ADDRESS`
+ Name (Bill To, Ship To, Remit To, Supplier) — `NAME`
+ Core Address (Vendor, Receiver, Bill To, Ship To, Remit To, Supplier) — `ADDRESS_BLOCK`
+ Street Address (Vendor, Receiver, Bill To, Ship To, Remit To, Supplier) — `STREET`
+ City (Vendor, Receiver, Bill To, Ship To, Remit To, Supplier) — `CITY`
+ State (Vendor, Receiver, Bill To, Ship To, Remit To, Supplier) — `STATE`
+ Country (Vendor, Receiver, Bill To, Ship To, Remit To, Supplier) — `COUNTRY`
+ ZIP Code (Vendor, Receiver, Bill To, Ship To, Remit To, Supplier) — `ZIP_CODE`

The AnalyzeExpense API returns the following elements for a given document page:
+ The number of receipts or invoices within a document represented as `ExpenseIndex`
+ The standardized name for individual fields represented as `Type`
+ The actual name of the field as it appears on the document, represented as `LabelDetection`
+ The value of the corresponding field represented as `ValueDetection`
+ The number of pages within the submitted document represented as `Pages`
+ The page number on which the field, value, or line items are detected, represented as `PageNumber`
+ The geometry, which includes the bounding box and coordinates location of the individual field, value, or line items on the page, represented as `Geometry`
+ The confidence score associated with each piece of data detected on the document, represented as `Confidence`
+ The entire row of individual line items purchased, represented as `EXPENSE_ROW`

The following is a portion of the API output for a receipt processed by AnalyzeExpense, showing the Total: \$55.64 in the document extracted as the standard field `TOTAL`. The actual text on the document appears as "Total:", the confidence score as 97.1, the page number as 1, and the total value as "\$55.64". The output also includes the bounding box and polygon coordinates:

```
{
    "Type": {
        "Text": "TOTAL",
        "Confidence": 99.94717407226562
    },
    "LabelDetection": {
        "Text": "Total:",
        "Geometry": {
            "BoundingBox": {
                "Width": 0.09809663146734238,
                "Height": 0.0234375,
                "Left": 0.36822840571403503,
                "Top": 0.8017578125
            },
            "Polygon": [
                {
                    "X": 0.36822840571403503,
                    "Y": 0.8017578125
                },
                {
                    "X": 0.466325044631958,
                    "Y": 0.8017578125
                },
                {
                    "X": 0.466325044631958,
                    "Y": 0.8251953125
                },
                {
                    "X": 0.36822840571403503,
                    "Y": 0.8251953125
                }
            ]
        },
        "Confidence": 97.10792541503906
    },
    "ValueDetection": {
        "Text": "$55.64",
        "Currency": {
            "Code": USD
        }
        "Geometry": {
            "BoundingBox": {
                "Width": 0.10395314544439316,
                "Height": 0.0244140625,
                "Left": 0.66837477684021,
                "Top": 0.802734375
            },
            "Polygon": [
                {
                    "X": 0.66837477684021,
                    "Y": 0.802734375
                },
                {
                    "X": 0.7723279595375061,
                    "Y": 0.802734375
                },
                {
                    "X": 0.7723279595375061,
                    "Y": 0.8271484375
                },
                {
                    "X": 0.66837477684021,
                    "Y": 0.8271484375
                }
            ]
        },
        "Confidence": 99.85165405273438
    },
    "PageNumber": 1
}
```

You can use synchronous operations to analyze an invoice or receipt. To analyze these documents, you use the AnalyzeExpense operation and pass a receipt or invoice to it. `AnalyzeExpense` returns the entire set of results. For more information, see [Analyzing Invoices and Receipts with Amazon Textract](analyzing-document-expense.md).
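A sketch of the synchronous call and of flattening the response into rows follows; the helper names and the synthetic `sample` response are illustrative, and the live call assumes AWS credentials:

```python
def analyze_receipt(image_bytes):
    """Sketch: synchronous AnalyzeExpense call (credentials assumed)."""
    import boto3
    return boto3.client("textract").analyze_expense(
        Document={"Bytes": image_bytes})

def expense_summary(response):
    """Flatten AnalyzeExpense SummaryFields into (type, label, value) rows."""
    rows = []
    for doc in response["ExpenseDocuments"]:
        for f in doc["SummaryFields"]:
            rows.append((
                f["Type"]["Text"],
                f.get("LabelDetection", {}).get("Text", ""),  # label may be absent
                f["ValueDetection"]["Text"],
            ))
    return rows

# Synthetic response shaped like AnalyzeExpense output:
sample = {"ExpenseDocuments": [{"ExpenseIndex": 1, "SummaryFields": [
    {"Type": {"Text": "TOTAL"},
     "LabelDetection": {"Text": "Total:"},
     "ValueDetection": {"Text": "$55.64"}},
    {"Type": {"Text": "VENDOR_NAME"},
     "ValueDetection": {"Text": "AnyCompany"}},
]}]}
print(expense_summary(sample))
```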

To analyze invoices and receipts asynchronously, use [StartExpenseAnalysis](API_StartExpenseAnalysis.md) to start processing an input document file. To get the results, call [GetExpenseAnalysis](API_GetExpenseAnalysis.md). The results for a given call to [StartExpenseAnalysis](API_StartExpenseAnalysis.md) are returned by `GetExpenseAnalysis`. For more information and an example, see [Processing Documents Asynchronously](async.md). 

# Analyzing Identity Documents


Amazon Textract can extract relevant information from passports, driver's licenses, and other identity documents issued by the US government using the AnalyzeID API. With Analyze ID, businesses can quickly and accurately extract information from IDs, such as US driver's licenses and passports, that have different templates or formats. The AnalyzeID API returns three categories of data types:
+ Key-value pairs available on the ID, such as Date of Birth, Date of Issue, ID #, Class, and Restrictions.
+ Implied fields on the document that may not have explicit keys associated with them, such as Name, Address, and Issued By.
+ The text of the document, the same as would be returned by document text detection.

Key names are standardized within the response. For example, if your driver's license says LIC# (license number) and your passport says Passport No, the Analyze ID response returns the standardized key "Document ID" along with the raw key (such as LIC#). This standardization lets customers combine information across many IDs that use different terms for the same concept.

![\[A mock driver's license from the state of Massachusetts. The name of the individual who owns the license is Maria Garcia. The ISS field has a value of 03/18/2018. The Number field has a value of 736HDV7874JSB. The EXP field has a value of 01/20/2028. The DOB field has a value of 03/18/2001. The CLASS field has a value of D. The REST field is NONE. The END field is NONE. The address on the ID is 100 Market Street, Bigtown, MA, 02801. The EYES field is BLK, the SEX field is F, the HGT field is 4-6'', the DD field is 03/12/2019, and the REV field is 03/12/2017.\]](http://docs.aws.amazon.com/textract/latest/dg/images/passport2.png)


Analyze ID returns information in structures called `IdentityDocumentFields`. These are JSON structures containing two pieces of information: the normalized Type and the Value associated with that Type. Both also have a confidence score. For more information, see [Identity Documentation Response Objects](identitydocumentfields.md). For more information regarding the text detection returned by Analyze ID, see [Text Detection and Document Analysis Response Objects](how-it-works-document-layout.md).

 You can use synchronous operations to analyze a driver's license or passport. To analyze these documents, you use the AnalyzeID operation and pass an identity document to it. `AnalyzeID` returns the entire set of results. For more information, see [Analyzing Identity Documentation with Amazon Textract](analyzing-document-identity.md). 

**Note**  
Some identity documents, such as driver's licenses, have two sides. You can pass the front and back images of a driver's license as separate images within the same Analyze ID API request.
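A sketch of a two-sided request and of flattening `IdentityDocumentFields` into a dictionary follows; the helper names and synthetic `sample` response are illustrative, and the live call assumes AWS credentials:

```python
def analyze_license(front_bytes, back_bytes):
    """Sketch: pass both sides of a driver's license in one AnalyzeID call."""
    import boto3  # assumes AWS credentials are configured
    return boto3.client("textract").analyze_id(
        DocumentPages=[{"Bytes": front_bytes}, {"Bytes": back_bytes}])

def id_fields(response):
    """Map normalized field types to their detected values."""
    fields = {}
    for doc in response["IdentityDocuments"]:
        for f in doc["IdentityDocumentFields"]:
            fields[f["Type"]["Text"]] = f["ValueDetection"]["Text"]
    return fields

# Synthetic response shaped like AnalyzeID output:
sample = {"IdentityDocuments": [{"IdentityDocumentFields": [
    {"Type": {"Text": "FIRST_NAME"}, "ValueDetection": {"Text": "MARIA"}},
    {"Type": {"Text": "DOCUMENT_NUMBER"},
     "ValueDetection": {"Text": "736HDV7874JSB"}},
]}]}
print(id_fields(sample))
```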

# Analyzing Lending Documents


Analyze Lending is a document processing API for mortgage documents. With Analyze Lending, you can automatically extract, classify, and validate information in mortgage-related documents. Analyze Lending receives a loan document and then splits it into pages, classifying them according to the type of document. The document pages are then automatically routed to Amazon Textract text processing operations for accurate data extraction and analysis.

[StartLendingAnalysis](API_StartLendingAnalysis.md) initiates the classification and analysis of a packet of lending documents. StartLendingAnalysis operates on a document file located in an Amazon S3 bucket.

After processing, you can retrieve the results with [GetLendingAnalysis](API_GetLendingAnalysis.md), and a summary with [GetLendingAnalysisSummary](API_GetLendingAnalysisSummary.md). Note that Analyze Lending document analysis is for asynchronous processing only.
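The start-then-poll shape of the workflow can be sketched as follows; the bucket and key are placeholders, the polling interval is arbitrary, and AWS credentials are assumed:

```python
import time

def analyze_lending(bucket, key):
    """Sketch: start Analyze Lending on an S3 document and poll for results."""
    import boto3  # assumes AWS credentials are configured
    client = boto3.client("textract")
    job_id = client.start_lending_analysis(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )["JobId"]
    while True:
        result = client.get_lending_analysis(JobId=job_id)
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)
    # A per-document-type summary is available from the companion operation:
    summary = client.get_lending_analysis_summary(JobId=job_id)
    return result, summary
```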

For a sample of the output for the GetLendingAnalysis operation, see the following. The return includes information about the document classification type for a page, the page number, and the fields extracted by Analyze Lending: 

```
 {
    "DocumentMetadata": {
        "Pages": 1
    },
    "JobStatus": "SUCCEEDED",
    "Results": [
        {
            "Page": 1,
            "PageClassification": {
                "PageType": [
                    {
                        "Value": "1005",
                        "Confidence": 99.99947357177734
                    }
                ],
                "PageNumber": [
                    {
                        "Value": "undetected",
                        "Confidence": 100.0
                    }
                ]
            },
            "Extractions": [
                {
                    "LendingDocument": {
                        "LendingFields": [
                            {
                                "Type": "OVERTIME_CONTINUANCE_LIKELY",
                                "ValueDetections": [
                                    {
                                        "Text": "Yes",
                                        "Geometry": {
                                            "BoundingBox": {
                                                "Width": 0.019448408856987953,
                                                "Height": 0.007367494981735945,
                                                "Left": 0.8211431503295898,
                                                "Top": 0.485835462808609
                                            },
                                            "Polygon": [
                                                {
                                                    "X": 0.8211431503295898,
                                                    "Y": 0.485835462808609
                                                },
                                                {
                                                    "X": 0.8405909538269043,
                                                    "Y": 0.4858577847480774
                                                },
                                                {
                                                    "X": 0.840591549873352,
                                                    "Y": 0.49320295453071594
                                                },
                                                {
                                                    "X": 0.8211436867713928,
                                                    "Y": 0.4931805729866028
                                                }
                                            ]
                                        },
                                        "Confidence": 95.0
                                    }
                                ]
                            },
                            {
                                "Type": "CURRENT_GROSS_PAY_WEEKLY",
                                "KeyDetection": {
                                    "Text": "Weekly",
                                    "Geometry": {
                                        "BoundingBox": {
                                            "Width": 0.039741966873407364,
                                            "Height": 0.009058262221515179,
                                            "Left": 0.17564243078231812,
                                            "Top": 0.5004485845565796
                                        },
                                        "Polygon": [
                                            {
                                                "X": 0.17564436793327332,
                                                "Y": 0.5004485845565796
                                            },
                                            {
                                                "X": 0.21538439393043518,
                                                "Y": 0.5004944205284119
                                            },
                                            {
                                                "X": 0.2153826206922531,
                                                "Y": 0.5095068216323853
                                            },
                                            {
                                                "X": 0.17564243078231812,
                                                "Y": 0.5094608664512634
                                            }
                                        ]
                                    },
                                    "Confidence": 99.98104858398438
                                },
                                "ValueDetections": [
                                    {
                                        "SelectionStatus": "NOT_SELECTED",
                                        "Geometry": {
                                            "BoundingBox": {
                                                "Width": 0.010146399028599262,
                                                "Height": 0.00771764200180769,
                                                "Left": 0.1600940227508545,
                                                "Top": 0.5003445148468018
                                            },
                                            "Polygon": [
                                                {
                                                    "X": 0.16009573638439178,
                                                    "Y": 0.5003445148468018
                                                },
                                                {
                                                    "X": 0.17024043202400208,
                                                    "Y": 0.5003561973571777
                                                },
                                                {
                                                    "X": 0.17023874819278717,
                                                    "Y": 0.5080621242523193
                                                },
                                                {
                                                    "X": 0.1600940227508545,
                                                    "Y": 0.5080504417419434
                                                }
                                            ]
                                        },
                                        "Confidence": 99.88064575195312
                                    }
                                ]
                            }
                        ],
                        "SignatureDetections": [
                            {
                                "Confidence": 98.95830535888672,
                                "Geometry": {
                                    "BoundingBox": {
                                        "Width": 0.1505945473909378,
                                        "Height": 0.019163239747285843,
                                        "Left": 0.1145595833659172,
                                        "Top": 0.8886017799377441
                                    },
                                    "Polygon": [
                                        {
                                            "X": 0.11456418037414551,
                                            "Y": 0.8886017799377441
                                        },
                                        {
                                            "X": 0.2651541233062744,
                                            "Y": 0.8887989521026611
                                        },
                                        {
                                            "X": 0.2651508152484894,
                                            "Y": 0.9077650308609009
                                        },
                                        {
                                            "X": 0.1145595833659172,
                                            "Y": 0.9075667262077332
                                        }
                                    ]
                                }
                            }
                        ]
                    }
                }
            ]
        }
    ],
    "AnalyzeLendingModelVersion": "1.0"
}
```

For a sample of the output for a GetLendingAnalysisSummary operation, see the following. The return includes information about all the documents grouped by the same document type, which are stored in DocumentGroups:

```
{
    "DocumentMetadata": {
        "Pages": 1
    },
    "JobStatus": "SUCCEEDED",
    "Summary": {
        "DocumentGroups": [
            {
                "Type": "1005",
                "SplitDocuments": [
                    {
                        "Index": 1,
                        "Pages": [
                            1
                        ]
                    }
                ],
                "DetectedSignatures": [
                    {
                        "Page": 1
                    }
                ],
                "UndetectedSignatures": []
            }
        ],
        "UndetectedDocumentTypes": [
            "1040_SCHEDULE_C",
            "1099_INT",
            "1099_SSA",
            "DEMOGRAPHIC_ADDENDUM",
            "1065",
            "1040",
            "1120_S",
            "IDENTITY_DOCUMENT",
            "SSA_89",
            "MORTGAGE_STATEMENT",
            "1099_MISC",
            "CHECKS",
            "HOA_STATEMENT",
            "INVESTMENT_STATEMENT",
            "1120",
            "1003",
            "VBA_26_0551",
            "1099_R",
            "PAYSLIPS",
            "1008",
            "W_2",
            "1099_NEC",
            "BANK_STATEMENT",
            "1040_SCHEDULE_E",
            "UTILITY_BILLS",
            "W_9",
            "UNCLASSIFIED",
            "HUD_92900_B",
            "PAYOFF_STATEMENT",
            "1099_G",
            "CREDIT_CARD_STATEMENT",
            "INVOICES",
            "RECEIPTS",
            "1040_SCHEDULE_D",
            "1099_DIV"
        ]
    },
    "AnalyzeLendingModelVersion": "1.0"
}
```
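The summary above groups pages by document type under `DocumentGroups`; a small helper (run here on a synthetic fragment, not a live response) can list each detected type with its assigned pages:

```python
def document_groups(summary_response):
    """List each detected document type with the pages assigned to it."""
    groups = []
    for g in summary_response["Summary"]["DocumentGroups"]:
        # Each group splits into one or more documents, each covering some pages.
        pages = [p for d in g["SplitDocuments"] for p in d["Pages"]]
        groups.append((g["Type"], pages))
    return groups

# Synthetic fragment shaped like a GetLendingAnalysisSummary response:
sample = {"Summary": {"DocumentGroups": [
    {"Type": "1005", "SplitDocuments": [{"Index": 1, "Pages": [1]}]},
    {"Type": "W_2", "SplitDocuments": [{"Index": 2, "Pages": [2, 3]}]},
]}}
print(document_groups(sample))  # [('1005', [1]), ('W_2', [2, 3])]
```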

For descriptions of the response objects, see [Analyze Lending Response Objects](lending-response-objects.md).

Consult the file included in the assets folder for a list of all possible recognized classes.

# Customizing Outputs


With Amazon Textract document analysis, you can customize the model output through adapters trained on your own documents. Adapters are components that plug in to the Amazon Textract pretrained deep learning model, customizing its output for your business-specific documents. You create an adapter for your specific use case by annotating (labeling) your sample documents and training the adapter on the annotated samples. Because an adapter is used in much the same way as queries, this feature is referred to as Custom Queries.

After you create an adapter, Amazon Textract provides you with an AdapterId. You can have multiple adapter versions within a single adapter. You can provide the AdapterId, along with an AdapterVersion, to an operation to specify that you want to use the adapter that you created. For example, you provide the two parameters to the [AnalyzeDocument](API_AnalyzeDocument.md) API for synchronous document analysis, or the [StartDocumentAnalysis](API_StartDocumentAnalysis.md) operation for asynchronous analysis. Providing the AdapterId as part of the request will automatically integrate the adapter into the analysis process and use it to enhance predictions for your documents. This way, you can leverage the capabilities of AnalyzeDocument while customizing the model to fit your own use case. 

For more information on creating and using adapters, see [Customizing your Queries Responses](textract-using-adapters.md). For a tutorial on how to create, train, and use adapters with the AWS Management Console, see [Custom Queries tutorial](textract-adapters-tutorial.md).