

# Post-call analytics
<a name="call-analytics-batch"></a>

Call Analytics provides post-call analyses, which are useful for monitoring customer service trends. 

Post-call transcriptions offer the following insights:
+ [Call characteristics](#tca-characteristics-batch), including talk time, non-talk time, speaker loudness, interruptions, talk speed, issues, outcomes, and action items
+ [Generative call summarization](#tca-summarization-batch), which creates a concise summary of the entire call
+ [Custom categorization](#tca-categorization-batch) with rules that you can use to home in on specific keywords and criteria
+ [PII redaction](#tca-pii-redact-batch) of your text transcript and your audio file
+ [Speaker sentiment](#tca-sentiment-batch) for each caller at various points in a call

## Post-call insights
<a name="call-analytics-insights-batch"></a>

This section details the insights available for post-call analytics transcriptions.

### Call characteristics
<a name="tca-characteristics-batch"></a>

The call characteristics feature measures the quality of agent-customer interactions using these criteria:
+ **Interruption**: Measures if and when one participant cuts off the other participant mid-sentence. Frequent interruptions may be associated with rudeness or anger, and could correlate to negative sentiment for one or both participants.
+ **Loudness**: Measures the volume at which each participant is speaking. Use this metric to see if the caller or the agent is speaking loudly or yelling, which is often indicative of being upset. This metric is represented as a normalized value (speech level per second of speech in a given segment) on a scale from 0 to 100, where a higher value indicates a louder voice.
+ **Non-talk time**: Measures periods of time that do not contain speech. Use this metric to see if there are long periods of silence, such as an agent keeping a customer on hold for an excessive amount of time.
+ **Talk speed**: Measures the speed at which both participants are speaking. Comprehension can be affected if one participant speaks too quickly. This metric is measured in words per minute.
+ **Talk time**: Measures the amount of time (in milliseconds) each participant spoke during the call. Use this metric to help identify if one participant is dominating the call or if the dialogue is balanced.
+ **Issues, outcomes, and action items**: Identifies issues, outcomes, and action items from the call transcript.

Here's an [output example](tca-output-batch.md#tca-output-characteristics-batch).
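As a quick illustration of how you might use these metrics, here's a minimal sketch that computes each participant's share of a call from a downloaded transcript. The field names follow the post-call analytics output format linked above (adjust them if your output differs), and the durations in the stand-in data are invented.

```python
import json

# Stand-in for a downloaded post-call analytics transcript; only the fields
# read below are included, and the durations are invented.
sample = json.loads("""
{
  "ConversationCharacteristics": {
    "TotalConversationDurationMillis": 300000,
    "TalkTime": {
      "DetailsByParticipant": {
        "AGENT": {"TotalTimeMillis": 180000},
        "CUSTOMER": {"TotalTimeMillis": 60000}
      }
    }
  }
}
""")

def talk_time_share(output):
    """Return each participant's talk time as a percentage of the whole call."""
    characteristics = output["ConversationCharacteristics"]
    total = characteristics["TotalConversationDurationMillis"]
    by_participant = characteristics["TalkTime"]["DetailsByParticipant"]
    return {role: details["TotalTimeMillis"] / total * 100
            for role, details in by_participant.items()}

print(talk_time_share(sample))  # the agent dominates this invented call
```

A large gap between the two shares (or a large remainder of non-talk time) is the kind of signal the talk time and non-talk time metrics are designed to surface.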

### Generative call summarization
<a name="tca-summarization-batch"></a>

Generative call summarization creates a concise summary of the entire call, capturing key components such as the reason for the call, the steps taken to resolve the issue, and next steps.

Using generative call summarization, you can:
+ Reduce the need for manual note-taking during and after calls.
+ Improve agent efficiency by letting agents spend more time talking to callers waiting in the queue rather than engaging in after-call work.
+ Speed up supervisor reviews, because call summaries are much quicker to review than entire transcripts.

To use generative call summarization with a post-call analytics job, see [Enabling generative call summarization](https://docs.aws.amazon.com/transcribe/latest/dg/tca-enable-summarization.html). For example output, see [Generative call summarization output example](https://docs.aws.amazon.com/transcribe/latest/dg/tca-output-batch.html#tca-output-summarization-batch). Generative call summarization is priced separately; see the [Amazon Transcribe pricing page](https://aws.amazon.com/transcribe/pricing).

**Note**  
Generative call summarization is currently available in `us-east-1` and `us-west-2`. This capability is supported with these English language dialects: Australian (`en-AU`), British (`en-GB`), Indian (`en-IN`), Irish (`en-IE`), Scottish (`en-AB`), US (`en-US`), and Welsh (`en-WL`).

### Custom categorization
<a name="tca-categorization-batch"></a>

Use call categorization to flag keywords, phrases, sentiment, or actions within a call. Our categorization options can help you triage escalations, such as negative-sentiment calls with many interruptions, or organize calls into specific categories, such as company departments.

The criteria you can add to a category include:
+ **Non-talk time**: Periods of time when neither the customer nor the agent is talking.
+ **Interruptions**: When the customer or the agent is interrupting the other person.
+ **Customer or agent sentiment**: How the customer or the agent is feeling during a specified time period. If at least 50 percent of the conversation turns (the back-and-forth between two speakers) in a specified time period match the specified sentiment, Amazon Transcribe considers the sentiment a match.
+ **Keywords or phrases**: Matches part of the transcription based on an exact phrase. For example, if you set a filter for the phrase "I want to speak to the manager", Amazon Transcribe filters for that *exact* phrase.

You can also flag the inverse of the previous criteria (talk time, lack of interruptions, a sentiment not being present, and the lack of a specific phrase).

Here's an [output example](tca-output-batch.md#tca-output-categorization-batch).

For more information on categories or to learn how to create a new category, see [Creating categories for post-call transcriptions](tca-categories-batch.md).

### Sensitive data redaction
<a name="tca-pii-redact-batch"></a>

Sensitive data redaction replaces personally identifiable information (PII) in the text transcript and the audio file. A redacted transcript replaces the original text with `[PII]`; a redacted audio file replaces spoken personal information with silence. This feature is useful for protecting customer information.

**Note**  
Post-call PII redaction is supported with US English (`en-US`) and US Spanish (`es-US`).

To view the list of PII that is redacted using this feature, or to learn more about redaction with Amazon Transcribe, see [Redacting or identifying personally identifiable information](pii-redaction.md).

Here is an [output example](tca-output-batch.md#tca-output-pii-redact-batch).

### Sentiment analysis
<a name="tca-sentiment-batch"></a>

Sentiment analysis estimates how the customer and agent are feeling throughout the call. This metric is represented as both a quantitative value (with a range from `-5` to `5`) and a qualitative value (`positive`, `neutral`, `mixed`, or `negative`). Quantitative values are provided per quarter and per call; qualitative values are provided per turn.

This metric can help you identify whether your agent is able to turn around an upset customer by the time the call ends.

Sentiment analysis works out-of-the-box and thus doesn't support customization, such as model training or custom categories.

Here's an [output example](tca-output-batch.md#tca-output-sentiment-batch).
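To make the per-quarter values concrete, here's a minimal sketch that checks whether customer sentiment improved over the course of a call. The structure mirrors the sentiment portion of the output format linked above; the scores in the stand-in data are invented.

```python
# Stand-in for the sentiment portion of a post-call analytics transcript;
# the structure mirrors the output format and the scores are invented.
sentiment = {
    "OverallSentiment": {"AGENT": 2.5, "CUSTOMER": -1.0},
    "SentimentByPeriod": {
        "QUARTER": {
            "CUSTOMER": [
                {"Score": -3.0},  # first quarter: upset
                {"Score": -1.5},
                {"Score": 0.5},
                {"Score": 2.0},   # last quarter: satisfied
            ]
        }
    }
}

def customer_turned_around(sentiment):
    """True if customer sentiment in the last quarter beats the first quarter."""
    quarters = sentiment["SentimentByPeriod"]["QUARTER"]["CUSTOMER"]
    return quarters[-1]["Score"] > quarters[0]["Score"]

print(customer_turned_around(sentiment))  # True
```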

# Creating categories for post-call transcriptions
<a name="tca-categories-batch"></a>

Post-call analytics supports the creation of custom categories, enabling you to tailor your transcript analyses to best suit your specific business needs.

You can create as many categories as you like to cover a range of different scenarios. For each category you create, you must create between 1 and 20 rules. Each rule is based on one of four criteria: interruptions, keywords, non-talk time, or sentiment. For more details on using these criteria with the [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html) operation, refer to the [Rule criteria for post-call analytics categories](#tca-rules-batch) section.

If the content in your media matches all the rules you've specified in a given category, Amazon Transcribe labels your output with that category. See [call categorization output](tca-output-batch.md#tca-output-categorization-batch) for an example of a category match in JSON output.
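For instance, here's a minimal sketch of reading the matched categories from a transcript you've already downloaded, assuming the `Categories.MatchedCategories` field shown in the linked output example:

```python
import json

def matched_categories(output):
    """Return the names of the categories a transcript matched, if any."""
    return output.get("Categories", {}).get("MatchedCategories", [])

# Stand-in for a downloaded transcript; only the field read above is shown.
sample = json.loads('{"Categories": {"MatchedCategories": ["my-new-category"]}}')
print(matched_categories(sample))  # ['my-new-category']
```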

Here are a few examples of what you can do with custom categories:
+ Isolate calls with specific characteristics, such as calls that end with a negative customer sentiment
+ Identify trends in customer issues by flagging and tracking specific sets of keywords
+ Monitor compliance, such as an agent speaking (or omitting) a specific phrase during the first few seconds of a call
+ Gain insight into customer experience by flagging calls with many agent interruptions and negative customer sentiment
+ Compare multiple categories to measure correlations, such as analyzing whether an agent using a welcome phrase correlates with positive customer sentiment

**Post-call versus real-time categories**

When creating a new category, you can specify whether you want it created as a post-call analytics category (`POST_CALL`) or as a real-time Call Analytics category (`REAL_TIME`). If you don't specify an option, your category is created as a post-call category by default. Post-call analytics category matches are available in your output upon completion of your post-call analytics transcription.
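If you want to check which of your existing categories are post-call versus real-time, you can partition the summaries returned by the Boto3 `list_call_analytics_categories` method. Here's a minimal sketch; the category names in the stand-in response are invented, and with Boto3 you would collect the summaries by paging through the API with `NextToken`.

```python
def split_by_input_type(categories):
    """Partition category summaries into post-call and real-time names."""
    # Categories created without an explicit InputType default to POST_CALL.
    post_call = [c["CategoryName"] for c in categories
                 if c.get("InputType", "POST_CALL") == "POST_CALL"]
    real_time = [c["CategoryName"] for c in categories
                 if c.get("InputType") == "REAL_TIME"]
    return post_call, real_time

# Stand-in for the Categories list returned by
# boto3.client('transcribe').list_call_analytics_categories()
sample = [
    {"CategoryName": "escalations", "InputType": "POST_CALL"},
    {"CategoryName": "live-compliance", "InputType": "REAL_TIME"},
]
print(split_by_input_type(sample))  # (['escalations'], ['live-compliance'])
```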

To create a new category for post-call analytics, you can use the **AWS Management Console**, **AWS CLI**, or **AWS SDKs**; see the following for examples:

## AWS Management Console
<a name="tca-category-console-batch"></a>

1. In the navigation pane, under Amazon Transcribe, choose **Amazon Transcribe Call Analytics**.

1. Choose **Call analytics categories**, which takes you to the **Call analytics categories** page. Select **Create category**.  
![\[Amazon Transcribe console screenshot: the Call Analytics 'categories' page.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-categories.png)

1. You're now on the **Create category** page. Enter a name for your category, then choose **Batch call analytics** in the **Category type** dropdown menu.  
![\[Amazon Transcribe console screenshot: the 'category settings' panel.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-categories-type.png)

1. You can choose a template to create your category or you can make one from scratch.

   If using a template: select **Use a template (recommended)**, choose the template you want, then select **Create category**.  
![\[Amazon Transcribe console screenshot: the 'category settings' panel showing optional templates.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-categories-settings-batch.png)

1. If creating a custom category: select **Create from scratch**.  
![\[Amazon Transcribe console screenshot: the 'create category' page showing 'rules' pane.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-categories-custom.png)

1. Add rules to your category using the dropdown menu. You can add up to 20 rules per category.  
![\[Amazon Transcribe console screenshot: the 'rules' pane with list of rule types.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-categories-custom-rules1.png)

1. Here's an example of a category with two rules: the agent interrupting the customer for more than 15 seconds during the call, and negative sentiment felt by the customer or the agent in the last two minutes of the call.  
![\[Amazon Transcribe console screenshot: the 'rules' pane with two example rules.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-categories-custom-rules2.png)

1. When you're finished adding rules to your category, choose **Create category**.

## AWS CLI
<a name="tca-category-cli-batch"></a>

This example uses the [create-call-analytics-category](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/transcribe/create-call-analytics-category.html) command. For more information, see [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html), [CategoryProperties](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CategoryProperties.html), and [Rule](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_Rule.html).

The following example creates a category with these rules:
+ The customer was interrupted within the first 60,000 milliseconds of the call, and the interruptions totaled at least 10,000 milliseconds.
+ There was a period of silence lasting at least 20,000 milliseconds between 10 percent and 80 percent into the call.
+ The agent had a negative sentiment at some point in the call.
+ The words "welcome" or "hello" were not spoken in the first 10,000 milliseconds of the call.

The request body that defines these rules is passed to the command using the `--cli-input-json` parameter.

```
aws transcribe create-call-analytics-category \
--cli-input-json file://filepath/my-first-analytics-category.json
```

The file *my-first-analytics-category.json* contains the following request body.

```
{
  "CategoryName": "my-new-category",
  "InputType": "POST_CALL",
  "Rules": [
        {
            "InterruptionFilter": {
                "AbsoluteTimeRange": {
                    "First": 60000
                },
                "Negate": false,
                "ParticipantRole": "CUSTOMER",
                "Threshold": 10000
            }
        },
        {
            "NonTalkTimeFilter": {
                "Negate": false,
                "RelativeTimeRange": {
                    "EndPercentage": 80,
                    "StartPercentage": 10
                },
                "Threshold": 20000
            }
        },
        {
            "SentimentFilter": {
                "ParticipantRole": "AGENT",
                "Sentiments": [
                    "NEGATIVE"                    
                ]
            }
        },
        {
            "TranscriptFilter": {
                "Negate": true,
                "AbsoluteTimeRange": {
                    "First": 10000
                },
                "Targets": [
                    "welcome",
                    "hello"
                ],
                "TranscriptFilterType": "EXACT"
            }
        }
    ]
}
```

## AWS SDK for Python (Boto3)
<a name="tca-category-python-batch"></a>

This example uses the AWS SDK for Python (Boto3) to create a category using the `CategoryName` and `Rules` arguments for the [create_call_analytics_category](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.create_call_analytics_category) method. For more information, see [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html), [CategoryProperties](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CategoryProperties.html), and [Rule](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_Rule.html).

For additional examples using the AWS SDKs, including feature-specific, scenario, and cross-service examples, refer to the [Code examples for Amazon Transcribe using AWS SDKs](service_code_examples.md) chapter.

The following example creates a category with these rules:
+ The customer was interrupted within the first 60,000 milliseconds of the call, and the interruptions totaled at least 10,000 milliseconds.
+ There was a period of silence lasting at least 20,000 milliseconds between 10 percent and 80 percent into the call.
+ The agent had a negative sentiment at some point in the call.
+ The words "welcome" or "hello" were not spoken in the first 10,000 milliseconds of the call.

```
import boto3

transcribe = boto3.client('transcribe', 'us-west-2')
category_name = "my-new-category"
transcribe.create_call_analytics_category(
    CategoryName = category_name,
    InputType = 'POST_CALL',
    Rules = [
        {
            'InterruptionFilter': {
                'AbsoluteTimeRange': {
                    'First': 60000
                },
                'Negate': False,
                'ParticipantRole': 'CUSTOMER',
                'Threshold': 10000
            }
        },
        {
            'NonTalkTimeFilter': {
                'Negate': False,
                'RelativeTimeRange': {
                    'EndPercentage': 80,
                    'StartPercentage': 10
                },
                'Threshold': 20000
            }
        },
        {
            'SentimentFilter': {
                'ParticipantRole': 'AGENT',
                'Sentiments': [
                    'NEGATIVE'                    
                ]
            }
        },
        {
            'TranscriptFilter': {
                'Negate': True,
                'AbsoluteTimeRange': {
                    'First': 10000
                },
                'Targets': [
                    'welcome',
                    'hello'
                ],
                'TranscriptFilterType': 'EXACT'
            }
        }
    ]
)

result = transcribe.get_call_analytics_category(CategoryName = category_name)
print(result)
```

## Rule criteria for post-call analytics categories
<a name="tca-rules-batch"></a>

This section outlines the types of custom `POST_CALL` rules that you can create using the [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html) API operation.

### Interruption match
<a name="tca-rules-interruptions-batch"></a>

Rules using interruptions ([InterruptionFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_InterruptionFilter.html) data type) are designed to match:
+ Instances where an agent interrupts a customer
+ Instances where a customer interrupts an agent
+ Any participant interrupting the other
+ A lack of interruptions

Here's an example of the parameters available with [InterruptionFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_InterruptionFilter.html):

```
"InterruptionFilter": { 
    "AbsoluteTimeRange": { 
       Specify the time frame, in milliseconds, when the match should occur
    },
    "RelativeTimeRange": { 
       Specify the time frame, in percentage, when the match should occur
    },
    "Negate": Specify if you want to match the presence or absence of interruptions,
    "ParticipantRole": Specify if you want to match speech from the agent, the customer, or both,    
    "Threshold": Specify a threshold for the amount of time, in seconds, interruptions occurred during the call
},
```

Refer to [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html) and [InterruptionFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_InterruptionFilter.html) for more information on these parameters and the valid values associated with each.
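As an example of the inverse match mentioned earlier in this chapter, here's a hypothetical rule, written as a Python dictionary you could pass in a `Rules` list, that matches calls where the customer was *not* interrupted during the first two minutes. The field names follow the `InterruptionFilter` data type; the time values are placeholders.

```python
# Hypothetical rule: match calls where the customer was NOT interrupted during
# the first two minutes. Field names follow the InterruptionFilter data type.
no_early_interruptions = {
    "InterruptionFilter": {
        "AbsoluteTimeRange": {"First": 120000},  # first 120,000 ms of the call
        "ParticipantRole": "CUSTOMER",
        "Negate": True,  # flip the match to the absence of interruptions
    }
}

print(no_early_interruptions["InterruptionFilter"]["Negate"])  # True
```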

### Keyword match
<a name="tca-rules-keywords-batch"></a>

Rules using keywords ([TranscriptFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_TranscriptFilter.html) data type) are designed to match:
+ Custom words or phrases spoken by the agent, the customer, or both
+ Custom words or phrases **not** spoken by the agent, the customer, or both
+ Custom words or phrases that occur in a specific time frame

Here's an example of the parameters available with [TranscriptFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_TranscriptFilter.html):

```
"TranscriptFilter": { 
    "AbsoluteTimeRange": { 
       Specify the time frame, in milliseconds, when the match should occur
    },
    "RelativeTimeRange": { 
       Specify the time frame, in percentage, when the match should occur
    },
    "Negate": Specify if you want to match the presence or absence of your custom keywords,
    "ParticipantRole": Specify if you want to match speech from the agent, the customer, or both,
    "Targets": [ The custom words and phrases you want to match ],
    "TranscriptFilterType": Use this parameter to specify an exact match for the specified targets
}
```

Refer to [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html) and [TranscriptFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_TranscriptFilter.html) for more information on these parameters and the valid values associated with each.
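For example, the compliance check described earlier in this chapter, flagging calls where the agent omits a required phrase early in the call, could be expressed as a hypothetical rule like the following. The field names follow the `TranscriptFilter` data type; the phrases and time values are placeholders.

```python
# Hypothetical rule: match calls where the agent did NOT speak either greeting
# within the first 15 seconds. Field names follow the TranscriptFilter data type.
missing_greeting = {
    "TranscriptFilter": {
        "AbsoluteTimeRange": {"First": 15000},  # first 15,000 ms of the call
        "ParticipantRole": "AGENT",
        "Negate": True,  # match the absence of the target phrases
        "Targets": ["thank you for calling", "how can I help you"],
        "TranscriptFilterType": "EXACT",
    }
}

print(missing_greeting["TranscriptFilter"]["Targets"])
```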

### Non-talk time match
<a name="tca-rules-nontalktime-batch"></a>

Rules using non-talk time ([NonTalkTimeFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_NonTalkTimeFilter.html) data type) are designed to match:
+ The presence of silence at specified periods throughout the call
+ The presence of speech at specified periods throughout the call

Here's an example of the parameters available with [NonTalkTimeFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_NonTalkTimeFilter.html):

```
"NonTalkTimeFilter": { 
    "AbsoluteTimeRange": { 
 Specify the time frame, in milliseconds, when the match should occur
 },
    "RelativeTimeRange": { 
 Specify the time frame, in percentage, when the match should occur
 },
    "Negate": Specify if you want to match the presence or absence of speech,      
    "Threshold": Specify a threshold for the amount of time, in seconds, silence (or speech) occurred during the call
},
```

Refer to [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html) and [NonTalkTimeFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_NonTalkTimeFilter.html) for more information on these parameters and the valid values associated with each.

### Sentiment match
<a name="tca-rules-sentiment-batch"></a>

Rules using sentiment ([SentimentFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_SentimentFilter.html) data type) are designed to match:
+ The presence or absence of a positive, negative, neutral, or mixed sentiment expressed by the customer, the agent, or both at specified points in the call

Here's an example of the parameters available with [SentimentFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_SentimentFilter.html):

```
"SentimentFilter": { 
    "AbsoluteTimeRange": { 
    Specify the time frame, in milliseconds, when the match should occur
    },
    "RelativeTimeRange": { 
    Specify the time frame, in percentage, when the match should occur
    },
    "Negate": Specify if you want to match the presence or absence of your chosen sentiment,
    "ParticipantRole": Specify if you want to match speech from the agent, the customer, or both,    
    "Sentiments": [ The sentiments you want to match ]
},
```

Refer to [CreateCallAnalyticsCategory](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_CreateCallAnalyticsCategory.html) and [SentimentFilter](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_SentimentFilter.html) for more information on these parameters and the valid values associated with each.

# Starting a post-call analytics transcription
<a name="tca-start-batch"></a>

Before starting a post-call analytics transcription, you must create all the [categories](tca-categories-batch.md) you want Amazon Transcribe to match in your audio.

**Note**  
Call Analytics transcripts can't be retroactively matched to new categories. Only the categories you create *before* starting a Call Analytics transcription can be applied to that transcription output.

If you've created one or more categories, and your audio matches all the rules within at least one of your categories, Amazon Transcribe flags your output with the matching category. If you choose not to use categories, or if your audio doesn't match the rules specified in your categories, your transcript isn't flagged.

To start a post-call analytics transcription, you can use the **AWS Management Console**, **AWS CLI**, or **AWS SDKs**; see the following for examples:

## AWS Management Console
<a name="analytics-start-console-batch"></a>

Use the following procedure to start a post-call analytics job. Calls that match all the rules defined in a category are labeled with that category.

1. In the navigation pane, under Amazon Transcribe Call Analytics, choose **Call analytics jobs**.

1. Choose **Create job**.  
![\[Amazon Transcribe console screenshot: the 'Call Analytics jobs' page.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-start.png)

1. On the **Specify job details** page, provide information about your Call Analytics job, including the location of your input data.  
![\[Amazon Transcribe console screenshot: the 'specify job details' page.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-start-settings1.png)

   Specify the desired Amazon S3 location of your output data and which IAM role to use.  
![\[Amazon Transcribe console screenshot: the 'access permissions' panel.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-start-settings2.png)

1. Choose **Next**.

1. For **Configure job**, turn on any optional features you want to include with your Call Analytics job. If you previously created categories, they appear in the **Categories** panel and are automatically applied to your Call Analytics job.  
![\[Amazon Transcribe console screenshot: the 'configure job' page showing all custom categories.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-start-configure.png)

1. Choose **Create job**.

## AWS CLI
<a name="analytics-start-cli"></a>

This example uses the [start-call-analytics-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/transcribe/start-call-analytics-job.html) command and the `channel-definitions` parameter. For more information, see [StartCallAnalyticsJob](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartCallAnalyticsJob.html) and [ChannelDefinition](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_ChannelDefinition.html).

```
aws transcribe start-call-analytics-job \
--region us-west-2 \
--call-analytics-job-name my-first-call-analytics-job \
--media MediaFileUri=s3://amzn-s3-demo-bucket/my-input-files/my-media-file.flac \
--output-location s3://amzn-s3-demo-bucket/my-output-files/ \
--data-access-role-arn arn:aws:iam::111122223333:role/ExampleRole \
--channel-definitions ChannelId=0,ParticipantRole=AGENT ChannelId=1,ParticipantRole=CUSTOMER
```

Here's another example using the [start-call-analytics-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/transcribe/start-call-analytics-job.html) command, and a request body that enables Call Analytics for that job.

```
aws transcribe start-call-analytics-job \
--region us-west-2 \
--cli-input-json file://filepath/my-call-analytics-job.json
```

The file *my-call-analytics-job.json* contains the following request body.

```
{
      "CallAnalyticsJobName": "my-first-call-analytics-job",
      "DataAccessRoleArn": "arn:aws:iam::111122223333:role/ExampleRole",
      "Media": {
          "MediaFileUri": "s3://amzn-s3-demo-bucket/my-input-files/my-media-file.flac"
      },
      "OutputLocation": "s3://amzn-s3-demo-bucket/my-output-files/",
      "ChannelDefinitions": [
          {
              "ChannelId": 0,
              "ParticipantRole": "AGENT"
          },
          {
              "ChannelId": 1,
              "ParticipantRole": "CUSTOMER"
          }
      ]
}
```

## AWS SDK for Python (Boto3)
<a name="analytics-start-python-batch"></a>

This example uses the AWS SDK for Python (Boto3) to start a Call Analytics job using the [start_call_analytics_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.start_call_analytics_job) method. For more information, see [StartCallAnalyticsJob](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartCallAnalyticsJob.html) and [ChannelDefinition](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_ChannelDefinition.html).

For additional examples using the AWS SDKs, including feature-specific, scenario, and cross-service examples, refer to the [Code examples for Amazon Transcribe using AWS SDKs](service_code_examples.md) chapter.

```
import time
import boto3
transcribe = boto3.client('transcribe', 'us-west-2')
job_name = "my-first-call-analytics-job"
job_uri = "s3://amzn-s3-demo-bucket/my-input-files/my-media-file.flac"
output_location = "s3://amzn-s3-demo-bucket/my-output-files/"
data_access_role = "arn:aws:iam::111122223333:role/ExampleRole"
transcribe.start_call_analytics_job(
     CallAnalyticsJobName = job_name,
     Media = {
        'MediaFileUri': job_uri
     },
     DataAccessRoleArn = data_access_role,
     OutputLocation = output_location,
     ChannelDefinitions = [
        {
            'ChannelId': 0, 
            'ParticipantRole': 'AGENT'
        },
        {
            'ChannelId': 1, 
            'ParticipantRole': 'CUSTOMER'
        }
     ]
)
    
 while True:
   status = transcribe.get_call_analytics_job(CallAnalyticsJobName = job_name)
   if status['CallAnalyticsJob']['CallAnalyticsJobStatus'] in ['COMPLETED', 'FAILED']:
     break
   print("Not ready yet...")
   time.sleep(5)
 print(status)
```

# Post-call analytics output
<a name="tca-output-batch"></a>

Post-call analytics transcripts are displayed in a turn-by-turn format by segment. They include call categorization, call characteristics (loudness scores, interruptions, non-talk time, talk speed), call summarization (issues, outcomes, and action items), redaction, and sentiment. Additionally, a summary of conversation characteristics is provided at the end of the transcript.

To increase accuracy and further customize your transcripts to your use case, such as including industry-specific terms, add [custom vocabularies](custom-vocabulary.md) or [custom language models](custom-language-models.md) to your Call Analytics request. To mask, remove, or tag words that you don't want in your transcription results, such as profanity, add [vocabulary filtering](vocabulary-filtering.md). If you are unsure which language is spoken in your media file, you can enable [batch language identification](https://docs.aws.amazon.com/transcribe/latest/dg/lang-id-batch.html) to identify it automatically.

The following sections show examples of JSON output at an insight level. For compiled output, see [Compiled post-call analytics output](#tca-output-batch-compiled).

## Call categorization
<a name="tca-output-categorization-batch"></a>

Here's what a category match looks like in your transcription output. This example shows that the audio from the 40040 millisecond timestamp to the 42460 millisecond timestamp is a match to the 'positive-resolution' category. In this case, the custom 'positive-resolution' category required a positive sentiment in the last few seconds of speech.

```
"Categories": {
    "MatchedDetails": {
        "positive-resolution": {
            "PointsOfInterest": [
                {
                    "BeginOffsetMillis":  40040,
                    "EndOffsetMillis":  42460
                }
            ]
        }
    },
    "MatchedCategories": [
        " positive-resolution"
    ]
},
```
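If you consume the transcript programmatically, category matches can be read directly from the parsed JSON. Here is a minimal sketch; the helper and variable names are illustrative, not part of the service output:

```python
import json

# The "Categories" block from the sample above, parsed as job output would be.
output = json.loads("""
{
    "Categories": {
        "MatchedDetails": {
            "positive-resolution": {
                "PointsOfInterest": [
                    {"BeginOffsetMillis": 40040, "EndOffsetMillis": 42460}
                ]
            }
        },
        "MatchedCategories": ["positive-resolution"]
    }
}
""")

def matched_category_spans(transcript):
    """Map each matched category to its (begin, end) offsets in milliseconds."""
    details = transcript["Categories"]["MatchedDetails"]
    return {
        name: [(poi["BeginOffsetMillis"], poi["EndOffsetMillis"])
               for poi in match["PointsOfInterest"]]
        for name, match in details.items()
    }

print(matched_category_spans(output))
# {'positive-resolution': [(40040, 42460)]}
```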

## Call characteristics
<a name="tca-output-characteristics-batch"></a>

Here's what call characteristics look like in your transcription output. Note that loudness scores are provided for each conversation turn, while all other characteristics are provided at the end of the transcript.

```
"LoudnessScores": [
    87.54,
    88.74,
    90.16,
    86.36,
    85.56,
    85.52,
    81.79,
    87.74,
    89.82
],
  
...  
    
"ConversationCharacteristics": {
    "NonTalkTime": {
        "Instances": [],
        "TotalTimeMillis": 0
    },
    "Interruptions": {
        "TotalCount": 2,
        "TotalTimeMillis": 10700,
        "InterruptionsByInterrupter": {
            "AGENT": [
                {
                    "BeginOffsetMillis": 26040,
                    "DurationMillis": 5510,
                    "EndOffsetMillis": 31550
                }
            ],
            "CUSTOMER": [
                {
                    "BeginOffsetMillis": 770,
                    "DurationMillis": 5190,
                    "EndOffsetMillis": 5960
                }
            ]
        }
    },
    "TotalConversationDurationMillis": 42460,
  
    ...
    
    "TalkSpeed": {
        "DetailsByParticipant": {
            "AGENT": {
                "AverageWordsPerMinute": 150
            },
            "CUSTOMER": {
                "AverageWordsPerMinute": 167
            }
        }
    },
    "TalkTime": {
        "DetailsByParticipant": {
            "AGENT": {
                "TotalTimeMillis": 32750
            },
            "CUSTOMER": {
                "TotalTimeMillis": 18010
            }
        },
        "TotalTimeMillis": 50760
    }
},
```
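The characteristics block lends itself to simple derived metrics. The following sketch (an illustration, not part of the service output) computes each participant's share of total talk time from the `TalkTime` details above. Note that talk time can overlap during interruptions, which is why `TotalTimeMillis` (50760) exceeds `TotalConversationDurationMillis` (42460) in this sample:

```python
# Minimal sketch: derive per-participant talk-time share from the
# "ConversationCharacteristics" values shown above.
characteristics = {
    "TotalConversationDurationMillis": 42460,
    "TalkTime": {
        "DetailsByParticipant": {
            "AGENT": {"TotalTimeMillis": 32750},
            "CUSTOMER": {"TotalTimeMillis": 18010},
        },
        "TotalTimeMillis": 50760,
    },
}

def talk_time_share(chars):
    """Fraction of total talk time used by each participant."""
    details = chars["TalkTime"]["DetailsByParticipant"]
    total = chars["TalkTime"]["TotalTimeMillis"]
    return {role: d["TotalTimeMillis"] / total for role, d in details.items()}

shares = talk_time_share(characteristics)
print({role: round(share, 2) for role, share in shares.items()})
# {'AGENT': 0.65, 'CUSTOMER': 0.35}
```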

 **Issues, Outcomes, and Action Items** 
+ In the following example, **issues** are identified as starting at character 7 and ending at character 51, which refers to this section of the text: "*I would like to cancel my recipe subscription*".

  ```
  "Content": "Well, I would like to cancel my recipe subscription.",
      
  "IssuesDetected": [
      {
          "CharacterOffsets": {
              "Begin": 7,
              "End": 51
          }
      }
  ],
  ```
+ In the following example, **outcomes** are identified as starting at character 12 and ending at character 78, which refers to this section of the text: "*I made all changes to your account and now this discount is applied*".

  ```
  "Content": "Wonderful. I made all changes to your account and now this discount is applied, please check.",
  
  "OutcomesDetected": [
      {
          "CharacterOffsets": {
              "Begin": 12,
              "End": 78
          }
      }
  ],
  ```
+ In the following example, **action items** are identified as starting at character 0 and ending at character 103, which refers to this section of the text: "*I will send an email with all the details to you today, and I will call you back next week to follow up*".

  ```
  "Content": "I will send an email with all the details to you today, and I will call you back next week to follow up. Have a wonderful evening.",
      
  "ActionItemsDetected": [
      {
          "CharacterOffsets": {
              "Begin": 0,
              "End": 103
          }
      }
  ],
  ```
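A small helper can recover the flagged text from each segment using the reported `CharacterOffsets`. The sketch below uses made-up data with zero-based offsets for illustration; verify the offset convention against your own output before relying on it:

```python
# Illustrative sketch: slice flagged phrases out of a segment's Content
# using the CharacterOffsets reported for issues, outcomes, or action items.
# The offsets here are zero-based and chosen for this made-up example.
segment = {
    "Content": "Well, I would like to cancel my recipe subscription.",
    "IssuesDetected": [
        {"CharacterOffsets": {"Begin": 6, "End": 51}}
    ],
}

def flagged_phrases(segment, key="IssuesDetected"):
    """Extract each flagged span from the segment's Content."""
    text = segment["Content"]
    return [
        text[d["CharacterOffsets"]["Begin"]:d["CharacterOffsets"]["End"]]
        for d in segment.get(key, [])
    ]

print(flagged_phrases(segment))
# ['I would like to cancel my recipe subscription']
```

The same helper works for `OutcomesDetected` and `ActionItemsDetected` by passing a different `key`.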

## Generative call summarization
<a name="tca-output-summarization-batch"></a>

Here's what generative call summarization looks like in your transcription output:

```
"ContactSummary": {
    "AutoGenerated": {
        "OverallSummary": {
            "Content": "A customer wanted to check to see if we had a bag allowance. We told them that we didn't have it, but we could add the bag from Canada to Calgary and then do the one coming back as well."
        }
    }
}
```

The analytics job will complete without summary generation in the following cases:
+ Insufficient conversation content: The conversation must include at least one turn from both the agent and the customer. When there is insufficient conversation content, the service will return the error code `INSUFFICIENT_CONVERSATION_CONTENT`.
+ Safety guardrails: The conversation must meet the safety guardrails in place to ensure that an appropriate summary is generated. When these guardrails are not met, the service will return the error code `FAILED_SAFETY_GUIDELINES`.

The error code can be found in the `Skipped` section within `AnalyticsJobDetails` in the output. You can also find the error reason in `CallAnalyticsJobDetails` in the [https://docs.aws.amazon.com/transcribe/latest/APIReference/API_GetCallAnalyticsJob.html](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_GetCallAnalyticsJob.html) API response.

 **Sample Error Output** 

```
{
    "JobStatus": "COMPLETED",
    "AnalyticsJobDetails": {
        "Skipped": [
            {
                "Feature": "GENERATIVE_SUMMARIZATION",
                "ReasonCode": "INSUFFICIENT_CONVERSATION_CONTENT",
                "Message": "The conversation needs to have at least one turn from both the participants to generate summary"
            }
        ]
    },
    "LanguageCode": "en-US",
    "AccountId": "***************",
    "JobName": "Test2-copy",
    ...
}
```
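Programmatically, one way to detect this condition is to inspect `AnalyticsJobDetails.Skipped` in the parsed output. The helper below is an illustrative sketch, not a service API:

```python
# Sketch: surface any skipped features (and why) from a completed job result.
job_result = {
    "JobStatus": "COMPLETED",
    "AnalyticsJobDetails": {
        "Skipped": [
            {
                "Feature": "GENERATIVE_SUMMARIZATION",
                "ReasonCode": "INSUFFICIENT_CONVERSATION_CONTENT",
                "Message": "The conversation needs to have at least one turn "
                           "from both the participants to generate summary",
            }
        ]
    },
}

def skipped_features(result):
    """Return {feature: reason_code} for everything the job skipped."""
    skipped = result.get("AnalyticsJobDetails", {}).get("Skipped", [])
    return {item["Feature"]: item["ReasonCode"] for item in skipped}

print(skipped_features(job_result))
# {'GENERATIVE_SUMMARIZATION': 'INSUFFICIENT_CONVERSATION_CONTENT'}
```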

## Sentiment analysis
<a name="tca-output-sentiment-batch"></a>

Here is what sentiment analysis looks like in your transcription output.
+ Qualitative turn-by-turn sentiment values:

  ```
  "Content": "That's very sad to hear. Can I offer you a 50% discount to have you stay with us?",
      
  ...
      
  "BeginOffsetMillis": 12180,
  "EndOffsetMillis": 16960,
  "Sentiment": "NEGATIVE",
  "ParticipantRole": "AGENT"
              
  ...
              
  "Content": "That is a very generous offer. And I accept.",
  
  ...
  
  "BeginOffsetMillis": 17140,
  "EndOffsetMillis": 19860,
  "Sentiment": "POSITIVE",
  "ParticipantRole": "CUSTOMER"
  ```
+ Quantitative sentiment values for the entire call:

  ```
  "Sentiment": {
      "OverallSentiment": {
          "AGENT": 2.5,
          "CUSTOMER": 2.1
      },
  ```
+ Quantitative sentiment values per participant and per call quarter:

  ```
  "SentimentByPeriod": {
      "QUARTER": {
          "AGENT": [
              {
                  "Score": 0.0,
                  "BeginOffsetMillis": 0,
                  "EndOffsetMillis": 9862
              },
              {
                  "Score": -5.0,
                  "BeginOffsetMillis": 9862,
                  "EndOffsetMillis": 19725
              },
              {
                  "Score": 5.0,
                  "BeginOffsetMillis": 19725,
                  "EndOffsetMillis": 29587
              },
              {
                  "Score": 5.0,
                  "BeginOffsetMillis": 29587,
                  "EndOffsetMillis": 39450
              }
          ],
          "CUSTOMER": [
              {
                  "Score": -2.5,
                  "BeginOffsetMillis": 0,
                  "EndOffsetMillis": 10615
              },
              {
                  "Score": 5.0,
                  "BeginOffsetMillis": 10615,
                  "EndOffsetMillis": 21230
              },
              {
                  "Score": 2.5,
                  "BeginOffsetMillis": 21230,
                  "EndOffsetMillis": 31845
              },
              {
                  "Score": 5.0,
                  "BeginOffsetMillis": 31845,
                  "EndOffsetMillis": 42460
              }
          ]
      }
  }
  ```
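The per-quarter scores (which range from -5 to 5) make it easy to spot where in the call sentiment dipped. This sketch, using the sample data above, finds each participant's lowest-scoring quarter; the helper is illustrative:

```python
# Sketch: find each participant's worst quarter from the per-quarter
# sentiment scores shown above.
sentiment_by_period = {
    "QUARTER": {
        "AGENT": [
            {"Score": 0.0, "BeginOffsetMillis": 0, "EndOffsetMillis": 9862},
            {"Score": -5.0, "BeginOffsetMillis": 9862, "EndOffsetMillis": 19725},
            {"Score": 5.0, "BeginOffsetMillis": 19725, "EndOffsetMillis": 29587},
            {"Score": 5.0, "BeginOffsetMillis": 29587, "EndOffsetMillis": 39450},
        ],
        "CUSTOMER": [
            {"Score": -2.5, "BeginOffsetMillis": 0, "EndOffsetMillis": 10615},
            {"Score": 5.0, "BeginOffsetMillis": 10615, "EndOffsetMillis": 21230},
            {"Score": 2.5, "BeginOffsetMillis": 21230, "EndOffsetMillis": 31845},
            {"Score": 5.0, "BeginOffsetMillis": 31845, "EndOffsetMillis": 42460},
        ],
    }
}

def worst_quarter(by_period):
    """Return (quarter_number, score) of the lowest-scoring quarter per role."""
    result = {}
    for role, quarters in by_period["QUARTER"].items():
        idx, q = min(enumerate(quarters), key=lambda pair: pair[1]["Score"])
        result[role] = (idx + 1, q["Score"])
    return result

print(worst_quarter(sentiment_by_period))
# {'AGENT': (2, -5.0), 'CUSTOMER': (1, -2.5)}
```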

## PII redaction
<a name="tca-output-pii-redact-batch"></a>

Here is what PII redaction looks like in your transcription output.

```
"Content": "[PII], my name is [PII], how can I help?",
"Redaction": [{
    "Confidence": "0.9998",
    "Type": "NAME",
    "Category": "PII"
}]
```

For more information, refer to [Redacting PII in your batch job](https://docs.aws.amazon.com/transcribe/latest/dg/pii-redaction-batch.html).
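If you need a quick audit of what was redacted, the `Redaction` entries in each segment can be tallied by entity type. A small illustrative sketch, using the sample segment above:

```python
from collections import Counter

# Sketch: tally redacted entities by type across transcript segments.
segments = [
    {
        "Content": "[PII], my name is [PII], how can I help?",
        "Redaction": [
            {"Confidence": "0.9998", "Type": "NAME", "Category": "PII"}
        ],
    }
]

def redaction_counts(segments):
    """Count redaction entries per entity type across all segments."""
    counts = Counter()
    for seg in segments:
        for entry in seg.get("Redaction", []):
            counts[entry["Type"]] += 1
    return dict(counts)

print(redaction_counts(segments))
# {'NAME': 1}
```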

## Language identification
<a name="tca-output-language-id-batch"></a>

Here is what language identification looks like in your transcription output when the feature is enabled.

```
"LanguageIdentification": [{
  "Code": "en-US",
  "Score": "0.8299"
}, {
  "Code": "en-NZ",
  "Score": "0.0728"
}, {
  "Code": "zh-TW",
  "Score": "0.0695"
}, {
  "Code": "th-TH",
  "Score": "0.0156"
}, {
  "Code": "en-ZA",
  "Score": "0.0121"
}]
```

In the preceding example, language identification populates the language codes with confidence scores. The result with the highest score is selected as the language code for transcription. For more details, refer to [Identifying the dominant languages in your media](https://docs.aws.amazon.com/transcribe/latest/dg/lang-id.html).
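The same selection can be reproduced in post-processing. In this sketch (the helper name is illustrative), note that scores arrive as strings in the JSON output, so they are converted before comparing:

```python
# Sketch: pick the dominant language from the LanguageIdentification list.
language_identification = [
    {"Code": "en-US", "Score": "0.8299"},
    {"Code": "en-NZ", "Score": "0.0728"},
    {"Code": "zh-TW", "Score": "0.0695"},
    {"Code": "th-TH", "Score": "0.0156"},
    {"Code": "en-ZA", "Score": "0.0121"},
]

def dominant_language(candidates):
    """Return the language code with the highest confidence score."""
    best = max(candidates, key=lambda c: float(c["Score"]))
    return best["Code"]

print(dominant_language(language_identification))
# en-US
```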

## Compiled post-call analytics output
<a name="tca-output-batch-compiled"></a>

For brevity, some content is replaced with ellipses in the following transcription output.

This sample includes the optional generative call summarization feature.

```
{
    "JobStatus": "COMPLETED",
    "LanguageCode": "en-US",
    "Transcript": [
        {
            "LoudnessScores": [
                78.63,
                78.37,
                77.98,
                74.18
            ],
            "Content": "[PII], my name is [PII], how can I help?",
            
            ...
     
             "Content": "Well, I would like to cancel my recipe subscription.",
             "IssuesDetected": [
                 {
                     "CharacterOffsets": {
                         "Begin": 7,
                         "End": 51
                     }
                 }
             ],
            
            ...
     
            "Content": "That's very sad to hear. Can I offer you a 50% discount to have you stay with us?",
            "Items": [
            ...
             ],
            "Id": "649afe93-1e59-4ae9-a3ba-a0a613868f5d",
            "BeginOffsetMillis": 12180,
            "EndOffsetMillis": 16960,
            "Sentiment": "NEGATIVE",
            "ParticipantRole": "AGENT"
        },
        {     
            "LoudnessScores": [
                    80.22,
                    79.48,
                    82.81
            ],
            "Content": "That is a very generous offer. And I accept.",
            "Items": [
            ...
            ],
            "Id": "f9266cba-34df-4ca8-9cea-4f62a52a7981",
            "BeginOffsetMillis": 17140,
            "EndOffsetMillis": 19860,
            "Sentiment": "POSITIVE",
            "ParticipantRole": "CUSTOMER"
        },
        {
     
     ...
     
            "Content": "Wonderful. I made all changes to your account and now this discount is applied, please check.",
            "OutcomesDetected": [
                {
                    "CharacterOffsets": {
                        "Begin": 12,
                        "End": 78
                    }
                }
            ],
            
            ...
            
            "Content": "I will send an email with all the details to you today, and I will call you back next week to follow up. Have a wonderful evening.",
            "Items": [
            ...   
            ],
            "Id": "78cd0923-cafd-44a5-a66e-09515796572f",
            "BeginOffsetMillis": 31800,
            "EndOffsetMillis": 39450,
            "Sentiment": "POSITIVE",
            "ParticipantRole": "AGENT"
        },
        {
           "LoudnessScores": [
               78.54,
               68.76,
               67.76
           ],
           "Content": "Thank you very much, sir. Goodbye.",
           "Items": [
           ...     
           ],
           "Id": "5c5e6be0-8349-4767-8447-986f995af7c3",
           "BeginOffsetMillis": 40040,
           "EndOffsetMillis": 42460,
           "Sentiment": "POSITIVE",
           "ParticipantRole": "CUSTOMER"
       }
   ],
   
   ...
     
   "Categories": {
        "MatchedDetails": {
            "positive-resolution": {
                "PointsOfInterest": [
                    {
                        "BeginOffsetMillis": 40040,
                        "EndOffsetMillis": 42460
                    }
                ]
            }
        },
        "MatchedCategories": [
            "positive-resolution"
        ]
    },  
 
    ...
    
    "ConversationCharacteristics": {
        "NonTalkTime": {
            "Instances": [],
            "TotalTimeMillis": 0
        },
        "Interruptions": {
            "TotalCount": 2,
            "TotalTimeMillis": 10700,
            "InterruptionsByInterrupter": {
                "AGENT": [
                    {
                        "BeginOffsetMillis": 26040,
                        "DurationMillis": 5510,
                        "EndOffsetMillis": 31550
                    }
                ],
                "CUSTOMER": [
                    {
                        "BeginOffsetMillis": 770,
                        "DurationMillis": 5190,
                        "EndOffsetMillis": 5960
                    }
                ]
            }
        },
        "TotalConversationDurationMillis": 42460,
        "Sentiment": {
            "OverallSentiment": {
                "AGENT": 2.5,
                "CUSTOMER": 2.1
            },
            "SentimentByPeriod": {
                "QUARTER": {
                    "AGENT": [
                        {
                            "Score": 0.0,
                            "BeginOffsetMillis": 0,
                            "EndOffsetMillis": 9862
                        },
                        {
                            "Score": -5.0,
                            "BeginOffsetMillis": 9862,
                            "EndOffsetMillis": 19725
                        },
                        {
                            "Score": 5.0,
                            "BeginOffsetMillis": 19725,
                            "EndOffsetMillis": 29587
                        },
                        {
                            "Score": 5.0,
                            "BeginOffsetMillis": 29587,
                            "EndOffsetMillis": 39450
                        }
                    ],
                    "CUSTOMER": [
                        {
                            "Score": -2.5,
                            "BeginOffsetMillis": 0,
                            "EndOffsetMillis": 10615
                        },
                        {
                            "Score": 5.0,
                            "BeginOffsetMillis": 10615,
                            "EndOffsetMillis": 21230
                        },
                        {
                            "Score": 2.5,
                            "BeginOffsetMillis": 21230,
                            "EndOffsetMillis": 31845
                        },
                        {
                            "Score": 5.0,
                            "BeginOffsetMillis": 31845,
                            "EndOffsetMillis": 42460
                        }
                    ]
                }
            }
        },
        "TalkSpeed": {
            "DetailsByParticipant": {
                "AGENT": {
                    "AverageWordsPerMinute": 150
                },
                "CUSTOMER": {
                    "AverageWordsPerMinute": 167
                }
            }
        },
        "TalkTime": {
            "DetailsByParticipant": {
                "AGENT": {
                    "TotalTimeMillis": 32750
                },
                "CUSTOMER": {
                    "TotalTimeMillis": 18010
                }
            },
            "TotalTimeMillis": 50760
        },
        "ContactSummary": { // Optional feature - Generative call summarization
            "AutoGenerated": {
                "OverallSummary": {
                    "Content": "The customer initially wanted to cancel but the agent convinced them to stay by offering a 50% discount, which the customer accepted after reconsidering cancelling given the significant savings. The agent ensured the discount was applied and said they would follow up to ensure the customer remained happy with the revised subscription."
                }
            }
        }
    },
    "AnalyticsJobDetails": {
        "Skipped": []
    },
    ...
}
```

# Enabling generative call summarization
<a name="tca-enable-summarization"></a>

**Note**  
 **Powered by Amazon Bedrock:** AWS implements [automated abuse detection](https://docs.aws.amazon.com//bedrock/latest/userguide/abuse-detection.html). Because post-contact summarization powered by generative AI is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI). 

To use generative call summarization with a post-call analytics job, see the following examples:

## AWS Management Console
<a name="analytics-summarization-console"></a>

In the **Summarization** panel, enable **Generative call summarization** to receive a summary in the output.

![\[Amazon Transcribe console screenshot: the 'Call Analytics jobs' page.\]](http://docs.aws.amazon.com/transcribe/latest/dg/images/analytics-summarization.png)


## AWS CLI
<a name="analytics-summarization-cli"></a>

This example uses the [start-call-analytics-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/transcribe/start-call-analytics-job.html) command and the `Settings` parameter with its `Summarization` sub-parameters. For more information, see [https://docs.aws.amazon.com//transcribe/latest/APIReference/API_StartCallAnalyticsJob.html](https://docs.aws.amazon.com//transcribe/latest/APIReference/API_StartCallAnalyticsJob.html).

```
aws transcribe start-call-analytics-job \
--region us-west-2 \
--call-analytics-job-name my-first-call-analytics-job \
--media MediaFileUri=s3://amzn-s3-demo-bucket/my-input-files/my-media-file.flac \
--output-location s3://amzn-s3-demo-bucket/my-output-files/ \
--data-access-role-arn arn:aws:iam::111122223333:role/ExampleRole \
--channel-definitions ChannelId=0,ParticipantRole=AGENT ChannelId=1,ParticipantRole=CUSTOMER \
--settings '{"Summarization":{"GenerateAbstractiveSummary":true}}'
```

Here's another example using the [start-call-analytics-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/transcribe/start-call-analytics-job.html) command, and a request body that enables summarization for that job.

```
aws transcribe start-call-analytics-job \
--region us-west-2 \
--cli-input-json file://filepath/my-call-analytics-job.json
```

The file *my-call-analytics-job.json* contains the following request body.

```
{
  "CallAnalyticsJobName": "my-first-call-analytics-job",
  "DataAccessRoleArn": "arn:aws:iam::111122223333:role/ExampleRole",
  "Media": {
    "MediaFileUri": "s3://amzn-s3-demo-bucket/my-input-files/my-media-file.flac"
  },
  "OutputLocation": "s3://amzn-s3-demo-bucket/my-output-files/",
  "ChannelDefinitions": [
    {
      "ChannelId": 0,
      "ParticipantRole": "AGENT"
    },
    {
      "ChannelId": 1,
      "ParticipantRole": "CUSTOMER"
    }
  ],
  "Settings": {
    "Summarization":{
      "GenerateAbstractiveSummary": true
    }
  }
}
```

## AWS SDK for Python (Boto3)
<a name="analytics-summarization-python"></a>

This example uses the AWS SDK for Python (Boto3) to start a Call Analytics job with summarization enabled using the [start_call_analytics_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.start_call_analytics_job) method. For more information, see [https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartCallAnalyticsJob.html](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartCallAnalyticsJob.html).

For additional examples using the AWS SDKs, including feature-specific, scenario, and cross-service examples, refer to the [Code examples for Amazon Transcribe using AWS SDKs](service_code_examples.md) chapter.

```
from __future__ import print_function
import time
import boto3
transcribe = boto3.client('transcribe', 'us-west-2')
job_name = "my-first-call-analytics-job"
job_uri = "s3://amzn-s3-demo-bucket/my-input-files/my-media-file.flac"
output_location = "s3://amzn-s3-demo-bucket/my-output-files/"
data_access_role = "arn:aws:iam::111122223333:role/ExampleRole"
transcribe.start_call_analytics_job(
  CallAnalyticsJobName = job_name,
  Media = {
    'MediaFileUri': job_uri
  },
  DataAccessRoleArn = data_access_role,
  OutputLocation = output_location,
  ChannelDefinitions = [
    {
      'ChannelId': 0, 
      'ParticipantRole': 'AGENT'
    },
    {
      'ChannelId': 1, 
      'ParticipantRole': 'CUSTOMER'
    }
  ],
  Settings = {
    "Summarization":
      {
        "GenerateAbstractiveSummary": true
      }
  }
)
    
while True:
  status = transcribe.get_call_analytics_job(CallAnalyticsJobName = job_name)
  if status['CallAnalyticsJob']['CallAnalyticsJobStatus'] in ['COMPLETED', 'FAILED']:
    break
  print("Not ready yet...")
  time.sleep(5)
print(status)
```