

# Evaluate agent and self-service interaction performance in Amazon Connect
<a name="evaluations"></a>

**Tip**  
**New user?** Check out the [Amazon Connect Agent Evaluation Forms Workshop](https://catalog.workshops.aws/amazon-connect-evaluation-forms/en-US). This online course guides you through creating a working example of an evaluation form.  
**IT administrators**: To enable Amazon Connect evaluation capabilities, go to the Amazon Connect console, choose your instance alias, and then choose **Data storage**, **Contact evaluations**, **Edit**. You'll be prompted to create or choose an S3 bucket. After the bucket is created, you can store evaluations and export them.

Amazon Connect performance evaluations enable you to define custom performance evaluation criteria to assess, monitor, and improve how agents and automated systems (bots, AI agents) interact with customers and resolve issues. You can then monitor performance by reviewing aggregated insights in dashboards, and drill down into individual contacts to see evaluations alongside recordings, transcripts, conversation summaries, and analytics in a single view. With integrated coaching, you can provide feedback to agents that highlights their strengths and opportunities to improve. 

You can perform manual evaluations for all contact types (voice, chat, email, and task). You can perform automated evaluations for voice and chat contacts that are analyzed by Amazon Connect conversational analytics, covering both agent interactions and automated interactions (handled by bots or AI agents). For more details on automated evaluations, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

To perform manual evaluations, you can search for a contact, choose the appropriate evaluation form, review the contact audio, screen recording, or transcript, and then evaluate how the human agent, AI agent, or bot interacted with the customer. You can then use those insights to improve the customer experience by providing agent coaching feedback and optimizing bots, AI agents, and self-service workflows.

**To evaluate performance**

1. Log in to Amazon Connect with a user account that has [permissions to perform evaluations](evaluation-and-coaching-permissions.md). 

1. Access the contact that you want to evaluate. There are a few ways you can do this. For example, someone may have shared the contact URL with you, or assigned you a task that has the URL. Or, you may have the contact ID, which lets you search for the contact record by doing the following: on the navigation pane, choose **Analytics and optimization**, **Contact search**, and then search for the contact that you want to evaluate.

1. On the **Contact details** page, choose **Evaluations** or the **<** icon.  
![\[The Contact details page, the Evaluations button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-evaluatebutton.png)

1. The **Evaluations** panel lists any evaluations that are in progress or completed for the contact.  
![\[The evaluations pane, the status of two evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-startevaluation.png)

1. To start an evaluation, choose an evaluation form from the dropdown menu, and then choose **Start evaluation**. If you haven't created an evaluation form yet, you must do that first. For more information, see [Create an evaluation form](create-evaluation-forms.md).

1. To navigate an especially long evaluation form, use the arrows next to each section to collapse or expand it.   
![\[The evaluations pane, the arrow to collapse or expand a section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-exampleevaluation.png)

1. Choose **Save** to save a form in progress. The status of the form becomes **Draft**. You can return to it any time to continue, or you can delete it and start over.  
![\[The evaluations pane, the status of an evaluation set to draft.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-draft.png)

1. When you're done, choose **Submit**. If you have skipped optional questions in the form, you will see a warning asking you to confirm that you want to submit the evaluation. Choose **Yes**. The evaluation is now **Completed**.  
![\[Skip optional questions and submit the evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-draft-submit.png)

# Assign security profile permissions for performance evaluations and coaching
<a name="evaluation-and-coaching-permissions"></a>

To allow users to create, automate, and access evaluation forms, assign the following **Analytics and optimization** security profile permissions: 
+ **Evaluation forms - manage form definitions**: Allows admins and managers to [create](create-evaluation-forms.md) and [manage](evaluationform-audit-trail.md) evaluation forms.
+ **Evaluation forms - perform contact evaluations**: Allows a user, such as a Quality Assurance team member, to use an evaluation form to review a contact. For an example image, see [Evaluate agent and self-service interaction performance in Amazon Connect](evaluations.md). 

  This permission allows users to [search](search-evaluations.md) evaluations by evaluation form, score, last updated date/range, evaluator, and status. It also allows them to view the evaluation form audit trail.
  + **View** permission enables users to view submitted evaluations. This permission enables users to see evaluations on any contacts they have access to (unless [restricted by tag-based access control](https://docs.aws.amazon.com/connect/latest/adminguide/tag-based-access-control-performance-evaluations.html)). You can grant this permission to users who perform evaluations (such as managers). 
  + **Create** permission enables users to create new evaluations, view and edit draft evaluations. 
  + **Edit** permission enables users to edit submitted evaluations.
  + **Delete** permission enables users to delete both draft and submitted evaluations.
+ **Evaluation forms - view my received evaluations**: Allows agents to search for and view completed evaluations that they have received. This does not grant access to evaluations that are in draft, under review, or part of calibration sessions. Access to an evaluation is subject to [tag-based access control](https://docs.aws.amazon.com/connect/latest/adminguide/tag-based-access-control-performance-evaluations.html). 
+ **Rules**: Permissions to create, view, edit, and delete rules are required to [automatically categorize contacts](rules.md) based on certain agent behaviors and customer outcomes. These contact categories can be used to [configure automation](create-evaluation-forms.md#step-automate) on evaluation forms. In addition, rules permissions are needed to [create a rule to submit automated evaluations](contact-lens-rules-submit-automated-evaluation.md).
+ **Evaluation forms - ask AI assistant**: Provides access to the **Ask AI** button while performing evaluations. The **Ask AI** button enables the user to get [generative AI-powered recommendations](generative-ai-performance-evaluations.md) for answers to questions in evaluation forms.
+ **Evaluation forms - manage calibration sessions**: Allows admins to create and manage calibration sessions to drive consistency and accuracy in how managers evaluate agent performance.
+ **Sample contacts**: Allows managers to randomly sample agents' contacts for evaluation. For example, a manager can select all agents in their hierarchy and get 5 random contacts per agent from the last week for evaluation.

To allow users to manage or access coaching sessions, assign the following **Analytics and optimization** security profile permissions: 
+ **Coaching - my coaching sessions**: Access coaching sessions where you are assigned as a coach or a participant.
  + **View**: View coaching sessions where you are the coach or the participant. If you are the participant, you can acknowledge the coaching session with this permission.
  + **Create**: Create new coaching sessions with yourself as the coach.
  + **Edit**: Edit coaching sessions where you are the coach.
  + **Delete**: Delete coaching sessions where you are the coach.
+ **Coaching - manage coaching sessions**: Access coaching sessions performed by yourself or others. This permission is for admins or quality managers.
  + **View**: View any coaching session.
  + **Create**: Create new coaching sessions. You can choose yourself as the coach or assign other users as the coach.
  + **Edit**: Edit any coaching session.
  + **Delete**: Delete any coaching session.

The **Admin** security profile has these permissions by default. 

For information about how to add more permissions to an existing security profile, see [Update security profiles in Amazon Connect](update-security-profiles.md).

# View an evaluation audit trail in Amazon Connect
<a name="evaluation-audit-trail"></a>

 An evaluation can be amended and submitted multiple times. When an evaluator submits changes to an existing evaluation, managers can view an audit trail that records:
+ Who submitted the original evaluation
+ Who re-submitted the evaluation
+ What changes they made (for example, changing answers or answer notes in an evaluation)

Contact center managers can use this information to perform internal audits and uncover opportunities to improve consistency across evaluators.

**To view an evaluation audit trail**

1. Log in to Amazon Connect with a user account that has the **Analytics and optimization** - **[Evaluation forms - perform contact evaluations](evaluation-and-coaching-permissions.md)** permission on their security profile. 

1. Access a contact with an evaluation that was edited after it was submitted.

1. Choose the evaluation you want to investigate. The following image shows the **Evaluations** page with a link to a completed evaluation.  
![\[A link to a completed evaluation that you can choose to view the audit trail.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-audit-example.png)

1. The **Overview** section of the evaluation contains **Change history**. It indicates the number of times the evaluation has been submitted. Choose the link as shown in the following image.  
![\[The Change history property.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-audit-change-history.png)

1. You can view the audit trail of subsequent submissions after the initial submission. Choose the arrow next to a re-submission to view details of the edits. The following image shows an example of an audit trail of changes that were made to an evaluation after it was submitted.  
![\[An audit trail of an evaluation that was changed after it was submitted.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-audit.png)

# Create an evaluation form in Amazon Connect
<a name="create-evaluation-forms"></a>

In Amazon Connect, you can create [many different evaluation forms](feature-limits.md#evaluationforms-feature-specs). For example, you may need a different evaluation form for each business unit, and for different queues. You can also create different evaluation forms for evaluating the agent interaction and the self-service interaction with a Lex bot or AI agent.

Each form can contain multiple sections and questions. 
+ You can assign [weights](about-scoring-and-weights.md) to each question and section to indicate how much their score impacts the overall score of the evaluation form.
+ You can configure automation on each question so that answers to those questions are automatically filled using insights and metrics from Contact Lens conversational analytics.

This topic explains how to create a form and configure automation using the Amazon Connect admin website. To create and manage forms programmatically, see [Evaluation actions](https://docs.aws.amazon.com/connect/latest/APIReference/evaluation-api.html) in the *Amazon Connect API Reference*.
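
For example, the following sketch uses the AWS SDK for Python (Boto3) to create a small form with one section and one single-selection question. The item shapes shown here are abbreviated and illustrative; refer to the CreateEvaluationForm operation in the API Reference for the full request schema.

```python
import boto3

connect = boto3.client("connect")

response = connect.create_evaluation_form(
    InstanceId="your-instance-id",  # placeholder
    Title="Sales evaluation",
    ScoringStrategy={"Mode": "QUESTION_ONLY", "Status": "ENABLED"},
    Items=[
        {
            "Section": {
                "Title": "Greeting",
                "RefId": "section-greeting",
                "Items": [
                    {
                        "Question": {
                            "Title": "Did the agent state their name and say they are here to assist?",
                            "RefId": "question-greeting-1",
                            "QuestionType": "SINGLESELECT",
                            "QuestionTypeProperties": {
                                "SingleSelect": {
                                    "Options": [
                                        {"RefId": "option-yes", "Text": "Yes", "Score": 10},
                                        {"RefId": "option-no", "Text": "No", "Score": 0},
                                    ]
                                }
                            },
                        }
                    }
                ],
            }
        }
    ],
)
print(response["EvaluationFormId"])
```

The form is created as a draft; activate it (see [Step 9: Activate an evaluation form](#step-activateform)) before evaluators can use it.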

**Topics**
+ [Step 1: Create an evaluation form with a title](#step-title)
+ [Step 2: Add sections and questions](#step-sections)
+ [Step 3: Add answers](#step-answers)
+ [Step 4: Conditionally enable questions](#step-conditionally-enable-questions)
+ [Step 5: Assign scores and ranges to answers](#step-assignscores)
+ [Step 6: Enable automated evaluations](#step-automate)
+ [Step 7: Preview the evaluation form](#step-preview)
+ [Step 8: Assign weights for final score](#step-weights)
+ [Step 9: Activate an evaluation form](#step-activateform)

## Step 1: Create an evaluation form with a title
<a name="step-title"></a>

The following steps explain how to create or duplicate an evaluation form and set a title.

1. Log in to Amazon Connect with a user account that has the following security profile permission: **Analytics and optimization** - **Evaluation forms - manage form definitions** - **Create**.

1. Choose **Analytics and optimization**, then choose **Evaluation forms**. 

1. On the **Evaluation forms** page, choose **Create new form**. 

   —or—

   Select an existing form and choose **Duplicate**.

1. Enter a title for the form, such as *Sales evaluation*, or change the existing title. Add any tags to the form to control access to it (see [Set up tag-based access controls on performance evaluations](https://docs.aws.amazon.com/connect/latest/adminguide/tag-based-access-control-performance-evaluations.html)). When finished, choose **OK**.   
![\[The evaluation forms page, the set form title section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-title.png)

   The following tabs appear at the top of the evaluation form page:
   + **Sections and questions**. Add sections, questions, and answers to the form.
   + **Scoring**. Enable scoring on the form. You can also apply scoring to sections or questions.

1. Choose **Save** at any time while creating your form. This enables you to navigate away from the page and return to the form later.

1. Continue to the next step to add sections and questions.

## Step 2: Add sections and questions
<a name="step-sections"></a>

1. While on the **Sections and questions** tab, add a title to section 1, for example, *Greeting*.   
![\[The evaluation form page, the sections and queues tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-greetingtitle.png)

1. Choose **Add question** to add a question. 

1. In the **Question title** box, enter the question that will appear on the evaluation form. For example, *Did the agent state their name and say they are here to assist?*   
![\[The evaluation form page, the question title box.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-greetingquestion1.png)

1. In the **Instructions to evaluators** box, add information to help evaluators or generative AI answer the question.

   For example, for the question *Did the agent try to validate the customer identity?* you may provide additional instructions such as, *The agent is required to always ask a customer their membership ID and postal code before addressing the customer's questions*.

1. In the **Question type** box, choose one of the following options to appear on the form:
   + **Single selection**: The evaluator can choose from a list of options, such as **Yes**, **No**, or **Good**, **Fair**, **Poor**.
   + **Multiple selection**: The evaluator can choose multiple answers from a list of options, such as a list of products that the customer was interested in purchasing, or non-compliant agent behaviors. 
   + **Text field**: The evaluator can enter free-form text. 
   + **Number**: The evaluator can enter a number from a range that you specify, such as 1-10. 
   + **Date**: The evaluator can choose a date as an answer. 

1. Continue to the next step to add answers.

## Step 3: Add answers
<a name="step-answers"></a>

1. On the **Answers** tab, add answer options that you want to display to evaluators, such as **Yes**, **No**.

1. To add more answers, choose **Add option**. 

   The following image shows example answers for a **Single selection** question.  
![\[The Answers tab, the "Add option" command.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-greetingquestion1-answer.png)

   The following image shows an answer range for a **Number** question.  
![\[The Answers tab, the Min value and Max value boxes.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-questionscoring4.png)

1. You can also mark a question as optional. This enables managers to skip the question (or mark it as **Not applicable**) while performing an evaluation.   
![\[The option to mark a question "not applicable".\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-questionscoring-not-applicable.png)

## Step 4: Conditionally enable questions
<a name="step-conditionally-enable-questions"></a>

Evaluation forms can have questions that are conditionally enabled or disabled, based on answers to other questions. For example, you can configure a follow-up question to appear in the form only if it is needed.

1. Choose a question that needs a follow-up question. The question type must be **Single selection** or **Multiple selection**, and it must not be an optional question (do not select the **Optional question** checkbox).

   For example, in the following image, question 1.1 is *What was the reason for the call?* and the **Optional question** checkbox is not selected.   
![\[The Question type is Single selection and the Optional question checkbox is not selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions1.png)

1. Add a follow-up question and now select the **Optional question** checkbox.

   In the following image, the follow-up question is question 1.2 *Did the agent check if the customer attempted new account registration online?* and the **Optional question** checkbox is selected.   
![\[A follow up question, and the Optional question checkbox is selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions2.png)

1. Choose the **Conditionally enable question** tab and then turn on **Conditional question**. The toggle is shown in the following image.   
![\[The Conditionally enable question tab, the Conditional question toggle.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions3.png)

1. Configure the follow-up question to be enabled only if the answer to question 1.1, *What was the reason for the call?*, is **New account registration**. These options are shown in the following image.  
![\[The Conditional question is one of Other.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions4.png)

   With this configuration, the follow-up question *Did the agent check if the customer attempted new account registration online?* is dynamically added to the form only if the answer to *What was the reason for the call?* is **New account registration**. In all other cases this question is not present in the form and does not need to be answered.

1. To verify that this configuration works as expected, use the **Preview** action. 

Following are a few things to keep in mind when creating conditional questions:
+ When a question is conditionally enabled, it is by default disabled.
+ When a question is conditionally disabled, it is by default enabled.
+ You can only use **Single selection** or **Multiple selection** questions to conditionally enable or disable other questions. The question cannot be optional.
+  You can choose one or more answer options to trigger the condition of a conditional question. 

**Note**  
If generative AI-powered automation is enabled on a question that is conditionally enabled, then the use of generative AI on that question counts toward the usage limit of questions that can be evaluated on a contact using generative AI. It counts even if the question was conditionally disabled.  
For the default limit of the **Number of evaluation questions that can be answered automatically on a contact using generative AI**, see [Contact Lens service quotas](amazon-connect-service-limits.md#contactlens-quotas). 

## Step 5: Assign scores and ranges to answers
<a name="step-assignscores"></a>

1. Go to the top of the form. Choose the **Scoring** tab, and then select the **Enable scoring** checkbox.  
![\[The evaluation forms page, the scoring tab, the Enable scoring checkbox.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-enablescoring.png)

   This enables scoring for the entire form. It also enables you to add ranges for answers to **Number** question types.

1. Return to the **Sections and questions** tab. Now you have the option to assign scores to **Single selection** questions, and add ranges for **Number** question types.  
![\[The Sections and questions tab, the scoring tab specific to the question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoring-feature.png)

1. When you create a **Number** type question, on the **Scoring** tab, choose **Add range** to enter a range of values. Indicate the worst to best score for the answer. 

   The following image shows an example of ranges and scoring for a **Number** question type. A sketch after these steps illustrates how the ranges map to a score.   
![\[The Scoring tab specific to the question, the answer ranges.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-questionscoring5.png)
   + If the agent interrupted the customer 0 times, they get a score of 10 (best).
   + If the agent interrupted the customer 1-4 times, they get a score of 5. 
   + If the agent interrupted the customer 5-10 times, they get a score of 1 (worst). 
**Note**  
You can configure a score of **0 (Automatic fail)** for an answer option. You can choose to apply **Automatic fail** to the section, the subsection, or the entire form. This means that selecting the answer on an evaluation will assign a score of zero to the corresponding section, the subsection, or the entire form. The **Automatic fail** option is shown in the following image.  

![\[The Automatic fail option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automaticfail.png)


1. After you assign scores to all the answers, choose **Save**.

1. When you're finished assigning scores, continue to the next step to automate the answers to certain questions, or continue to [preview the evaluation form](#step-preview). 
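
To illustrate how the ranges in the preceding example translate to a score, the following sketch reproduces the mapping in plain Python. It is only an illustration of the scoring behavior described above, not Amazon Connect code.

```python
def score_interruptions(interruption_count: int) -> int:
    """Map a numeric answer to a score using the example ranges above."""
    if interruption_count == 0:
        return 10  # best
    if 1 <= interruption_count <= 4:
        return 5
    if 5 <= interruption_count <= 10:
        return 1   # worst
    raise ValueError("Value falls outside the configured ranges")

print(score_interruptions(3))  # prints 5
```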

## Step 6: Enable automated evaluations
<a name="step-automate"></a>

Amazon Connect enables you to automatically answer questions within evaluation forms (for example, did the agent adhere to the greeting script?) using insights and metrics from conversational analytics. Automation can be used to:
+ **Assist evaluators with performance evaluations**: Evaluators are provided with automated answers to questions on evaluation forms while performing evaluations. Evaluators can override automated answers before submission.
+ **Automatically fill and submit evaluations**: Administrators can configure evaluation forms to automate responses to all questions within an evaluation form and automatically submit evaluations for up to 100% of customer interactions. Evaluators can edit and re-submit evaluations (if needed).

The automation options vary depending on whether you are evaluating an agent interaction or an automated interaction (for example, self-service while interacting with a Lex bot or AI agent). You can choose between agent and automated interactions by choosing **Additional settings**, and then **Contact interaction type**.

For both assisting evaluators and automatically submitting evaluations, you first need to set up automation on individual questions within an evaluation form. Amazon Connect provides three ways of automating evaluations:
+ **Contact categories**: *Single selection* questions (for example, did the agent properly greet the customer (Yes/No)?), and *Multiple selection* questions (for example, what parts of the greeting script did the agent state correctly?) can be automatically answered using contact categories defined with rules. For more information, see [Create Contact Lens rules using the Amazon Connect admin website](build-rules-for-contact-lens.md).
+ **Generative AI**: Both *Single selection* and *Text field* questions can be automatically answered using generative AI.
**Note**  
Currently, integrated generative AI cannot be used to automate evaluations of self-service (automated) interactions with Lex bots and AI agents.
+ **Metrics**: *Numeric* questions (for example, what was the longest that the customer was put on hold?) can be automatically answered using metrics such as longest hold time, sentiment score, etc.

Following are examples of each type of automation for each type of question.

**Example automation for a Single selection question using Contact Lens categories**
+ The following image shows that the answer to the evaluation question is yes when Contact Lens has categorized the contact with the label **ProperGreeting**. To label contacts as **ProperGreeting**, you must first set up a rule that detects the words or phrases expected as part of a proper greeting, for example, the agent saying "Thank you for calling" in the first 30 seconds of the interaction. For more information, see [Automatically categorize contacts](rules.md).  
![\[A question section, the automation tab with Contact Lens categories.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation1.png)

  For information about setting up contact categories, see [Automatically categorize contacts](rules.md).

**Example automation for an *optional* Single selection question using contact categories**
+ The following image shows example automation of an optional Single selection question. The first check is whether the question is applicable or not. A rule is created to check whether the contact is about opening a new account. If so, the contact is categorized as **CallReasonNewAccountOpening**. If the call is not about opening a new account, the question is marked as **Not Applicable**.

  The subsequent conditions run only if the question is applicable. The answer is marked as **Yes** or **No** based on the contact category **NewAccountDisclosures**. This category checks whether the agent provided the customer with disclosures about opening a new account.  
![\[A question section, the automation tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation1a.png)

  For information about setting up contact categories, see [Automatically categorize contacts](rules.md).

**Example automation for an *optional* Single selection question using Generative AI**
+ The following image shows example automation using generative AI. Generative AI automatically answers the evaluation question by interpreting the question title and the evaluation criteria specified in the instructions of the evaluation question, and using them to analyze the conversation transcript. Using complete sentences to phrase the evaluation question and clearly specifying the evaluation criteria within the instructions improves the accuracy of generative AI. For information, see [Evaluate agent performance in Amazon Connect using generative AI](generative-ai-performance-evaluations.md).  
![\[A question section, the generative AI Contact Lens option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation-genai.png)

**Example automation for a Multiple selection question using Contact Lens categories**
+ Multiple selection questions can be used to capture answer reasoning for a single selection question. They can also be used to trigger conditional questions by checking for customer scenarios, such as call reasons. The following example shows how you can use rules that capture customer call reasons to automatically fill answers to a multiple selection question. Unlike single selection questions, all of the conditions are executed sequentially to answer a multiple selection question. In the following example, if the categories **StatusCheck** and **ChangeExistingRequest** are both present on the contact, then the answer would be both "Checking status of existing service request" and "Changing a service request".  
![\[A question section, the automation tab with Contact Lens categories.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation1b.png)

  For information about setting up contact categories, see [Automatically categorize contacts](rules.md).

**Example automation for a Numeric question**
+ If the agent interaction duration was less than 30 seconds, score the question as a 10.   
![\[A question section, the scoring tab, a numeric question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation2.png)
+ On the **Automation** tab, choose the metric that is used to automatically evaluate the question.  
![\[A question section, the automation tab, a metric to automatically evaluate the question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation3.png)
+ You can automate responses to numeric questions using Contact Lens metrics (such as customer sentiment score, non-talk time percentage, and number of interruptions) and contact metrics (such as longest hold duration, number of holds, and agent interaction duration).

After an evaluation form with automation configured on some of the questions is activated, you receive automated responses to those questions when you start an evaluation from within the Amazon Connect admin website.

**To automatically fill and submit evaluations**

1. Set up automation on every question within an evaluation form as previously described.

1. Turn on **Enable fully automated submission of evaluations** before activating the evaluation form. This toggle is shown in the following image.  
![\[The Enable fully automated evaluations toggle set to On.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation4.png)

1. Activate the evaluation form.

1. Upon activation, you are asked to create a rule in Contact Lens that submits an automated evaluation. For more information, see [Create a rule in Contact Lens that submits an automated evaluation](contact-lens-rules-submit-automated-evaluation.md). The rule enables you to specify which contacts should be automatically evaluated using the evaluation form.

## Step 7: Preview the evaluation form
<a name="step-preview"></a>

The **Preview** button is active only after you have assigned scores to answers for all of the questions.

![\[The evaluation form page, the preview button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-previewbutton.png)


The following image shows the form preview. Use the arrows to collapse sections and make the form easier to preview. You can edit the form while viewing the preview, as shown in the following image.

![\[The preview of the evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-previewmode.png)


## Step 8: Assign weights for final score
<a name="step-weights"></a>

When scoring is enabled for the evaluation form, you can assign *weights* to sections or questions. The weight raises or lowers the impact of a section or question on the final score of the evaluation.

![\[The evaluation form page, the scoring tab, the score weights section, the question option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoring.png)


### Weight distribution mode
<a name="weight-distribution-mode"></a>

With **Weight distribution mode**, you choose whether to assign weight by section or question: 
+ **Weight by section**: You can evenly distribute the weight of each question in the section.
+ **Weight by question**: You can lower or raise the weight of specific questions.

When you change a weight of a section or question, the other weights are automatically adjusted so the total is always 100 percent.

For example, in the following image, question 2.1 was manually set to 50 percent. The weights that display in italics were adjusted automatically. In addition, you can turn on **Exclude optional questions from scoring**, which assigns all optional questions a weight of zero and redistributes the weight among the remaining questions.

![\[Score weights for a question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-weightdistribution3.png)
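
The following sketch illustrates the redistribution arithmetic described above: when one question is pinned to a specific weight, the remaining weight is spread evenly across the other questions so that the total stays at 100 percent. This is only an illustration of the behavior, not Amazon Connect code.

```python
def redistribute_weights(num_questions: int, pinned: dict) -> list:
    """Spread the remaining weight evenly across questions that were not pinned."""
    remaining = 100.0 - sum(pinned.values())
    unpinned = num_questions - len(pinned)
    return [
        round(pinned.get(i, remaining / unpinned), 2)
        for i in range(num_questions)
    ]

# Question 2 of 4 (index 1) is manually set to 50 percent; the others share the rest.
print(redistribute_weights(4, {1: 50.0}))  # [16.67, 50.0, 16.67, 16.67]
```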


## Step 9: Activate an evaluation form
<a name="step-activateform"></a>

Choose **Activate** to make the form available to evaluators. Evaluators will no longer be able to choose the previous version of the form from the dropdown list when starting new evaluations. For any evaluations that were completed using previous versions, you can still view the version of the form that the evaluation was based on.

If you are still working on setting up the evaluation form and want to save your work at any point, choose **Save**, **Save draft**.

If you want to check whether the form has been set up correctly, but not activate it, choose **Save**, **Save and validate**.
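
If you manage forms programmatically, the equivalent operation is ActivateEvaluationForm. The following is a minimal sketch using the AWS SDK for Python (Boto3); the IDs and version number are placeholders.

```python
import boto3

connect = boto3.client("connect")

connect.activate_evaluation_form(
    InstanceId="your-instance-id",    # placeholder
    EvaluationFormId="your-form-id",  # placeholder
    EvaluationFormVersion=1,          # the form version to make active
)
```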

# Set up tag-based access controls on performance evaluations
<a name="tag-based-access-control-performance-evaluations"></a>

Amazon Connect enables businesses to restrict access to specific performance evaluation forms, preventing unauthorized access to evaluation form templates and completed evaluations. Businesses can give managers access to modify or use only the evaluation form templates that are relevant to their business line or function, improving security and making it easier for managers to select the right form while completing evaluations. Additionally, both managers and agents can be restricted from viewing certain completed evaluations. For example, you can restrict agents from viewing test evaluations completed with a form template that has not yet been finalized.

You can start by tagging evaluation forms, for example, "Department: New customer". When you tag an evaluation form, all subsequent evaluations filled with that evaluation form carry the same tag. You can then enable tag-based access controls on evaluation forms and evaluations within the security profiles of users whose access you want to restrict to specific evaluation forms and evaluations. Once tag-based access control on evaluation forms is enabled, users can modify only specific evaluation forms on the **Evaluation forms** page. On the **Contact search** page, users can only search for evaluation forms that they have access to, and use those evaluation forms to start evaluations. Similarly, within Amazon Connect **Dashboards**, users can only view aggregated scores for evaluation forms that they have access to. Tag-based access control on evaluations restricts users to viewing only specific evaluations on the **Contact details** page. For example, if a specific evaluation should only be visible to certain personas, such as fraud investigation, then you can restrict agents from viewing those evaluations on the **Contact details** page.

**Important**  
After you enable tag-based access control on evaluations, users lose access to any evaluations created before the evaluation form was tagged. If you are already using performance evaluations, we recommend that you first tag evaluation forms and accumulate evaluations over several months before enabling tag-based access to evaluations.  
We recommend using a single tag on an evaluation form (for example, "Department: New customer") when configuring tag-based access. Although assigning and permitting access on multiple tags is possible, it creates complexity. This is discussed in more detail below.

## Tagging evaluation forms
<a name="tagging-evaluation-forms"></a>

You can tag evaluation forms while creating a new evaluation form, or by updating an existing evaluation form. The tags that you can add to an evaluation form depend on the tag-based access control granted by your security profile(s):
+ If your security profile has no tag-based access controls configured for evaluation forms, then you can create or update a form with any tag(s).
+ If you have one security profile with tag-based access control enabled on evaluation forms, then the evaluation form tags from your security profile are added automatically while creating evaluation forms through the Amazon Connect UI. You cannot update tags on evaluation forms in this scenario.
+ If you have multiple security profiles, you must add all the tags from one of your security profiles to the evaluation form while creating or updating it. For example, if one of your security profiles grants you access to "Department: Sales" and another grants you access to "Department: Retention", then you must add either the "Department: Sales" or "Department: Retention" tag on the evaluation form. While creating an evaluation form, tags from one of your security profiles are automatically added.

Below are the steps to add tags to an evaluation form.

**While creating an evaluation form**
+ You will be prompted to add tags to an evaluation form when you create it (see [Create an evaluation form](create-evaluation-forms.md)).  
![\[The evaluation forms page, the set form title section with tags field.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-title.png)

**While editing an evaluation form**

1. Open the evaluation form with a security profile that has the permission **Evaluation forms - manage form definitions** - **Edit**.

1. Choose the edit icon next to the tags.  
![\[The edit tags icon in the evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-tags-edit-form-tags.png)

1. Update the tags.  
![\[The update tags dialog.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-tags-update-form-tags.png)

**Note**  
Tag changes are applied immediately to all versions of the form. Updating tags does not require you to save or activate the form.

## Tag inheritance from evaluation forms to evaluations
<a name="tag-inheritance-evaluation-forms"></a>

While creating an evaluation from the Amazon Connect UI, the tags from the evaluation form are copied over to the evaluation upon creation. For example, if the evaluation form is tagged as "Department: Sales", then the evaluation created with this form also carries the same tag. If the evaluation form contains multiple tags (Department: Sales, Product: Dishwasher), then those are also carried over to the evaluation, provided you have access to create an evaluation with those tags (discussed in more detail in the next section).

**Note**  
Tags are copied over only to new evaluations. If you have existing evaluations, then adding or updating tags on evaluation forms will not change tags on historically completed evaluations.

## Set up tag-based access to evaluation forms and evaluations
<a name="setup-tag-based-access-control"></a>

1. Log in to **Amazon Connect** with a user profile that has the **Security Profiles - View** and **Edit** permissions.

1. Go to the **Users** > **Security profiles** page, and select the security profile that you want to modify.

1. Choose **Show advanced options**.

1. Select **Allow: Tag-based access control**.

1. Under **Resources**, select **Evaluation forms** and **Contact evaluations**.

1. Enter the tag that you want to restrict the users' security profile to.  
![\[The tag-based access control setup screen.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-tags-tbac-setup.png)

If you have existing evaluations, then enabling tag-based access to contact evaluations results in individuals who already have access to evaluations losing access to historical evaluations. To retain access to historical evaluations, you can:
+ Start by tagging forms. This results in any evaluations performed subsequently carrying the same tag. Once you have accumulated several months of evaluations, you can enable tag-based access.
+ Your technical administrator can use the [TagResource](https://docs.aws.amazon.com/connect/latest/APIReference/API_TagResource.html) API to tag any historical evaluations (see the sketch after this list).
+ Enable tag-based access on **evaluation forms** but not **contact evaluations**. This may be desirable when other controls already limit which contacts are accessible. For example, supervisors may already be restricted to accessing contacts within their own hierarchy, and you may want to grant your supervisors access to all evaluations on those contacts.
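
The following is a minimal sketch of the TagResource approach mentioned in this list, using the AWS SDK for Python (Boto3). It lists the historical evaluations on a single contact and applies the same tag that is on the form; the instance ID, contact ID, and tag are placeholders, and you would repeat this for each contact whose evaluations you want to tag.

```python
import boto3

connect = boto3.client("connect")

INSTANCE_ID = "your-instance-id"  # placeholder
CONTACT_ID = "your-contact-id"    # placeholder

# Tag each historical evaluation on the contact so that users whose security
# profiles require the tag keep access after tag-based access control is enabled.
evaluations = connect.list_contact_evaluations(
    InstanceId=INSTANCE_ID,
    ContactId=CONTACT_ID,
)
for summary in evaluations["EvaluationSummaryList"]:
    connect.tag_resource(
        resourceArn=summary["EvaluationArn"],
        tags={"Department": "New customer"},
    )
```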

If you have enabled tag-based access control on **Contact evaluations**, we recommend configuring tag-based access consistently on **Evaluation forms**. We also recommend that users' security profiles have access to all tags on the form(s) that they need to use. For example, if a user needs to use a form with the tags "Department: New customer" and "Product: Auto Insurance", the user's security profile should have access control enabled for both of these tags across both **Evaluation forms** and **Contact evaluations**. If the profile has only one of the tags, then creating an evaluation manually in the UI will fail.

## Restricting access to automated evaluation forms under testing
<a name="tag-based-access-automated-evaluation-forms-testing"></a>

Tag-based access control can be used to run automated evaluation tests in production without revealing evaluation results to agents and supervisors. This is useful if you are already using evaluation forms in production. An example setup is as follows:
+ On the **Evaluation forms** page, tag evaluation forms that are live and should be visible to agents and supervisors as "Live: Yes".
+ On **Users > Security profiles**, turn on tag-based access control on **Evaluation forms** and **Evaluations**, restricting agent and supervisor access to forms with the tag "Live: Yes".
**Note**  
Before enabling tag-based access control, you may want to let sufficient history accumulate (for example, two months of evaluations), because enabling it results in a loss of access to historical evaluations.
+ Automated evaluation forms that are still under testing can be tagged as "Live: No", preventing them from being visible to agents and supervisors.
+ Quality managers responsible for creating evaluation forms can be granted access to evaluation forms without tag-based restrictions. Alternatively, you can assign two security profiles to quality managers:
  + The first would grant them access to **Evaluation Forms** and **Evaluations** with the tag "Live: No"
  + The second would grant them access to **Evaluation Forms** and **Evaluations** with the tag "Live: Yes"
+ Once you are ready to go live with automated evaluations, you can duplicate the form and change the tag to "Live: Yes". The original form that was under testing should continue to carry the tag "Live: No". This ensures that supervisors and agents cannot see historical aggregated evaluation scores in **Dashboards** from when the form was under testing.

## Tag-based access control while setting up rules to submit automated evaluations
<a name="tag-based-access-automated-evaluations"></a>

You can only create a rule to submit automated evaluations using a form that you have access to. For example, suppose there is an automated evaluation form, **Auto Insurance Sales Scorecard**, with the tags "Department: New customer" and "Product: Auto Insurance", and your security profile grants you access to the tag "Department: New customer" for evaluation forms. Then you would be able to set up a rule to auto-submit evaluations using the form **Auto Insurance Sales Scorecard**.

## Tag-based access control while setting up calibration sessions
<a name="tag-based-access-calibration-sessions"></a>

As an administrator of a calibration session, you can only create a calibration session with evaluation forms that you have access to.

# View an evaluation form audit trail in Amazon Connect
<a name="evaluationform-audit-trail"></a>

1. Select the evaluation form that you want to research.  
![\[The evaluation forms page, a box to the left of an evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-select.png)

1. At the bottom of the page, under **Example Evaluation**, use the dropdown menu to view previous versions, who accessed them, and when. The following image shows an example audit trail.   
![\[An example audit trail for an evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-version.png)

1. Optionally, choose one of the forms to open it.

## What do Active, Draft, and Locked mean?
<a name="evaluationform-active-draft-locked"></a>

A form is in one of the following states:
+ **Active**. A published version of the form that is available to evaluators.
+ **Draft**. An inactive, locked version of the form. A draft is unlocked only when you are working on it.
+ **Locked**. An evaluation form is locked when you activate or publish it. Even after you deactivate the form, it stays locked and becomes a historical version of the form. However, you can activate the historical version to save it as a new version. 

# Evaluate agent performance in Amazon Connect using generative AI
<a name="generative-ai-performance-evaluations"></a>

**Note**  
**Powered by Amazon Bedrock**: AWS implements automated abuse detections. Because generative AI features in Contact Lens are built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI).

Managers can specify their evaluation criteria in natural language, and use generative AI to automate evaluations of up to 100% of customer interactions. Generative AI enables you to automate evaluations of additional agent behaviors (for example, was the agent able to resolve the customer's issue?), enabling managers to comprehensively monitor and improve regulatory compliance, agent adherence to quality standards, and sensitive data collection, while reducing the time spent on evaluating agent performance. Along with answers, you are also provided with context and justification, and references to specific points in the transcript that you can use to provide agent coaching.

You can use generative AI to assist managers with filling evaluations, or use it to automatically fill and submit evaluations. For more information about setting up automated evaluations, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

Evaluation questions are answered using generative AI by interpreting the question title and the evaluation criteria specified within the instructions to evaluators associated with each question, and using these to analyze the conversation transcript. For more information, see [Step 2: Add sections and questions](create-evaluation-forms.md#step-sections).

## Process to automate evaluations using generative AI
<a name="cl-genai-overall-process"></a>

The following is the overview of the automation process:

1. Get a high-level understanding of which of the evaluation questions should be answered with generative AI by reading [Guidelines to improve generative AI accuracy](#guidelines-to-improve-generative-ai-accuracy).

1. Assign permissions to select users within your quality management team to use the Ask AI assistant. These users will see the **Ask AI** button next to each question while performing evaluations, and can use it to get answer recommendations. These users can provide feedback on which questions receive accurate answers using generative AI. For more information, see [Assign security profile permissions for performance evaluations and coaching](evaluation-and-coaching-permissions.md).

1. To improve accuracy, you can provide additional evaluation criteria within [instructions to evaluators](create-evaluation-forms.md#step-sections). For more information, see [Guidelines to improve generative AI accuracy](#guidelines-to-improve-generative-ai-accuracy).

1. Once you have a good understanding of which questions can be accurately answered with generative AI, you can do a broader rollout by pre-configuring on the evaluation form whether a question receives an automated answer using generative AI.

1. Once you have set up automation, any user performing evaluations using the evaluation form gets automated generative AI answers to the pre-configured questions (without requiring additional permissions). For more information, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

1. You can set up automation such that an evaluator first reviews the generative AI answers before submission, or you can automatically fill and submit evaluations. 

## Use Ask AI to get generative AI answer recommendations
<a name="get-generative-ai-powered-recommendations"></a>

1.  Log in to Amazon Connect with a user account that has [permissions to perform evaluations](evaluation-and-coaching-permissions.md) and to use the [Ask AI assistant](evaluation-and-coaching-permissions.md). 

1.  Choose the **Ask AI** button below a question to receive a generative AI-powered recommendation for the answer, along with context and justification (reference points from the transcript that were used to provide answers). 

   1.  The answer is automatically selected based on the generative AI recommendation, but you can change it.  

   1.  You can get generative AI-powered recommendations by choosing **Ask AI** for up to 10 questions per contact. For more information, see [Contact Lens service quotas](amazon-connect-service-limits.md#contactlens-quotas).

1.  You can choose the time associated with a transcript reference to go to that point in the conversation.   
![\[Generative AI-powered recommendations while evaluating agent performance.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/get-generative-ai-powered-recommendations-performance.png)

## Provide additional criteria for answering evaluation form questions using generative AI
<a name="provide-criteria-for-answering-evaluation-form-questions"></a>

 While configuring an evaluation form, you can provide criteria for answering questions within the **instructions to evaluators** associated with each evaluation form question. Apart from driving consistency in evaluations by evaluators, these instructions are also used to provide generative AI-powered evaluations. 

![\[New account opening scorecard.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/provide-criteria-for-answering-evaluation-form-questions.png)


## Set up automated evaluations using generative AI on the evaluation form
<a name="set-up-automated-evals-on-eval-form-with-generative-ai"></a>

You can pre-configure on an evaluation form whether a question will be automatically answered using generative AI. Then, if you start an evaluation using that evaluation form in the Amazon Connect UI, answers to these questions are automatically filled using generative AI (without requiring you to choose **Ask AI**). You can also use generative AI to automatically fill and submit evaluations. For automatically submitted evaluations, you can use generative AI to answer up to 10 questions per contact (see [Contact Lens service quotas](amazon-connect-service-limits.md#contactlens-quotas)). Note that this limit does not apply to automation using contact categories or metrics (for example, longest hold duration).

To learn more about setting up automated evaluations using generative AI, see [Guidelines to improve generative AI accuracy](#guidelines-to-improve-generative-ai-accuracy).

## Set up generative AI-powered evaluations in non-English languages
<a name="set-up-generative-ai-evals-in-non-english-language"></a>

By default, if you do not set the language of an evaluation form, the generative AI model automatically detects the language of your evaluation form questions and tries to provide answers in the same language, if the AI model understands that language. By default, generative AI answer justifications are typically provided in English.

To consistently receive both AI-generated answers and answer justifications in your preferred language, you can set the language of an evaluation form, choosing from **English**, **Spanish**, **Portuguese**, **French**, **German**, and **Italian**. By explicitly setting the language of an evaluation, you can also perform cross-language evaluations, where generative AI fills an evaluation form in English even when the conversation transcript is in another language, say Spanish. This enables multilingual contact centers to use a standardized evaluation framework across languages.

To set the language of the evaluation form:

1. Select the **Additional settings** tab while creating or updating an evaluation form.

1. Choose **Form language** from the dropdown.

1. Ensure your form's questions, instructions, and answer choices are in the same language as the selected **Form language** for optimal AI performance.

![\[The evaluation form page, the Additional settings tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-languageexample1.png)


## Guidelines to improve generative AI accuracy
<a name="guidelines-to-improve-generative-ai-accuracy"></a>

**Selecting questions for getting generative AI recommendations**

1. Use generative AI to respond to questions that can be answered using information from the conversation transcript, without the need to validate information through third-party applications such as CRM systems.

1. Using generative AI to answer questions requiring numeric responses, such as "How long did the agent interact with the customer?" is not recommended. Instead, consider [setting up automation](create-evaluation-forms.md#step-automate) for such evaluation form questions using Contact Lens or contact metrics.

1. Avoid using generative AI to answer highly subjective questions, for example, "Was the agent attentive during the call?" 

**Improving phrasing of questions and associated instructions**

1. Use complete sentences to word questions, for example, replacing *ID validation* with "Did the agent attempt to validate the customer’s identity?" enables the generative AI to better understand the question.

1.  We recommend that you provide detailed criteria for answering the question within the **instructions to evaluators**, especially if it's not possible to answer the question based on the question text alone. For example, for the question "Did the agent try to validate the customer identity?", you may want to provide additional instructions such as, *The agent is required to always ask a customer their membership ID and postal code before addressing the customer's questions*.

1.  If answering a question requires knowledge of some business specific terms, then specify those terms in the instruction. For example, if the agent needs to specify the name of the department in the greeting, then list the required department name(s) that the agent needs to state as part of the **instructions to evaluators** associated with the question.

1.  If possible, use the term 'agent' instead of terms like 'colleague', 'employee', 'representative', 'advocate', or 'associate'. Similarly use the term 'customer', instead of terms like 'member', 'caller', 'guest', or 'subscriber'.

1. Only use double quotes in your instruction if you want to check for exact words being spoken by the agent or the customer. For example, if the instruction is to check for the agent saying `"Have a nice day"`, then the generative AI will not detect *Have a nice afternoon*. Instead, the instruction should say: `The agent wished the customer a nice day`. 

# Performance evaluations of self-service interactions in Amazon Connect
<a name="performance-evaluations-automated-interactions"></a>

Amazon Connect provides you with the ability to automatically evaluate the quality of self-service interactions and get aggregated insights to improve customer experience. Managers can define custom criteria to assess the quality of self-service interactions, which can be filled manually or automatically using insights from conversational analytics and other Amazon Connect data. For example, you can automatically assess whether the AI agent repeatedly fails to understand the customer, resulting in poor customer sentiment and transfer to a human agent. Managers can review these insights in aggregate and on individual contacts, alongside self-service interaction recordings and transcripts, to identify opportunities to improve bot or AI agent performance.

**Note**  
Performance evaluations of self-service interactions are only available as part of Amazon Connect (with unlimited AI). For more information, see [Amazon Connect pricing](https://aws.amazon.com/connect/pricing/).

To automatically evaluate self-service interactions, you first need to [Enable conversational analytics in Amazon Connect Contact Lens](enable-analytics.md). Performance evaluations can evaluate the entire self-service interaction, irrespective of whether it's handled by touch-tone input, Lex bots, Amazon Connect AI agents, or custom bots within Amazon Connect. The steps to set up automated evaluations of self-service interactions are as follows:
+ [Step 1: Create a draft evaluation form](#step-create-draft-form-self-service)
+ [Step 2: Set up automation](#step-setup-automation-self-service)
+ [Step 3: Set up a rule to automatically submit evaluations of self-service interactions](#step-setup-rule-self-service)

## Step 1: Create a draft evaluation form
<a name="step-create-draft-form-self-service"></a>

You can define custom criteria to evaluate self-service interactions. These criteria can measure self-service resolution, customer experience or bot/AI agent behaviors.

An example evaluation form is as follows:

Section 1: Self-service success  
+ **1.1** Was the contact handled during self-service, without transferring to a human agent? (Single selection)
+ **1.2** Was the customer able to self-serve at least one of their needs? (Single selection)

Section 2: Customer experience  
+ **2.1** What was the overall customer sentiment score during self-service? (Number)
+ **2.2** Did the customer express frustration during self-service? (Single selection)

Section 3: AI agent behaviors  
+ **3.1** Did the AI agent fail to understand the customer and ask them to repeat themselves? (Single selection)
+ **3.2** Was the AI agent rude or aggressive towards the customer at any point? (Single selection)

For additional details, see [Create an evaluation form in Amazon Connect](create-evaluation-forms.md).
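
If you prefer to define forms programmatically, the same criteria can be created with the CreateEvaluationForm API. The following is a minimal, hedged sketch using boto3: the instance ID is a placeholder, the titles mirror the example above, and the exact nesting of `Items` and `QuestionTypeProperties` should be verified against the CreateEvaluationForm API reference before use.

```
import boto3

connect = boto3.client("connect")

# Sketch: one section with one single-selection question from the example form
# above. Verify the request shape against the CreateEvaluationForm API reference.
response = connect.create_evaluation_form(
    InstanceId="your-instance-id",  # placeholder: your Amazon Connect instance ID
    Title="Self-service quality (example)",
    ScoringStrategy={"Mode": "QUESTION_ONLY", "Status": "ENABLED"},
    Items=[
        {
            "Section": {
                "Title": "Self-service success",
                "RefId": "sec_self_service",
                "Items": [
                    {
                        "Question": {
                            "Title": (
                                "Was the contact handled during self-service, "
                                "without transferring to a human agent?"
                            ),
                            "RefId": "q_containment",
                            "QuestionType": "SINGLESELECT",
                            "QuestionTypeProperties": {
                                "SingleSelect": {
                                    "Options": [
                                        {"RefId": "opt_yes", "Text": "Yes", "Score": 10},
                                        {"RefId": "opt_no", "Text": "No", "Score": 0},
                                    ]
                                }
                            },
                        }
                    }
                ],
            }
        }
    ],
)
print(response["EvaluationFormId"])
```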

## Step 2: Set up automation
<a name="step-setup-automation-self-service"></a>

You can automate evaluations of self-service interactions using Amazon Connect rules (including generative AI-powered semantic match rules) and using integrated metrics such as customer sentiment. Note that currently, you cannot use the integrated generative AI within the evaluation form to automatically evaluate self-service interactions.

### Automation using rules
<a name="automation-using-rules"></a>

Start with setting up a rule:

1. On the navigation menu, choose **Analytics and optimization**, **Rules**.

1. Select **Create a rule**, **Conversational analytics**.

1. Under **When**, use the dropdown list to choose **post-call analysis** or **post-chat analysis**.

Example rules that you can create:

Self-service containment  
+ Add a new condition checking that the queue was not assigned and the contact was handled during the automated interaction.
+ You can also use natural language intent to confirm that the customer did not request a human agent during the automated interaction with the Lex bot or AI agent.
Amazon Connect understands the following keywords within semantic match rules:  
+ **System:** Denotes a bot or AI agent
+ **Agent:** Refers to the human agent
+ **Customer:** The person interacting with the contact center
+ **Automated interaction:** The part of the customer interaction where a human agent was not present on the conversation, including self-service interaction with a bot or AI agent, and wait time in the queue
+ **Human agent interaction:** Customer interaction with the human agent

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-containment-rule.png)

+ If you are using an Amazon Connect AI agent, you can also check whether the AI agent for self-service escalated to a human.

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-ai-agent-escalation-check.png)


Self-service success for at least one intent  
Create a rule using **natural language - semantic match** condition:  
"During the automated interaction, the system successfully fulfilled at least one of the customer requests, such as providing information or completing another service request."

Bot/AI agent failing to understand the customer  
Create a rule using **natural language - semantic match** condition:  
"The system failed to understand the customer and asked the customer to repeat themselves."

Customer expressed frustration  
Create a rule using **natural language - semantic match** condition:  
"Customer expressed frustration during the automated interaction."

After you set up a rule, you can use it to answer single selection or multiple selection questions in your evaluation form. For example, if you created a rule to check for self-service containment, then you can use that rule to answer a question on whether the contact was handled during self-service.

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-use-rules-in-form.png)


### Automation using metrics
<a name="automation-using-metrics"></a>

You can use contact metrics to automatically answer questions about the self-service experience. For example, you can check for customer sentiment during the automated interaction. To use metrics, ensure that the question type is set to **Number**.

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-metrics-automation.png)


After you have set up automation on every question, toggle on **Enable automated submission of evaluations** and activate the form. You are then guided to create a rule that automatically submits the evaluation form.

For additional details, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

## Step 3: Set up a rule to automatically submit evaluations of self-service interactions
<a name="step-setup-rule-self-service"></a>

You can use the following conditions to identify specific self-service interactions.

AI Agent  
To trigger a self-service interaction evaluation, you can identify whether specific AI agents were active on the contact. You can also check for a specific AI agent version.  

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-ai-agent-identification.png)


Custom contact attributes and contact segment attributes  
You can also use **custom contact attributes** and **contact segment attributes** set within flows to identify specific workflows, bots, customer intents, or outcomes. For example, you might set a contact attribute such as `pizzaOrderBot = true` within a flow when a Lex bot called "Pizza Order Bot" is invoked during the conversation.  

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-custom-contact-attributes.png)
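
Contact attributes like the hypothetical `pizzaOrderBot` flag are usually set with a **Set contact attributes** block in the flow. As a rough sketch of a programmatic alternative (for example, from a Lambda function invoked by the flow), the UpdateContactAttributes API can set the same attribute; the IDs below are placeholders.

```
import boto3

connect = boto3.client("connect")

# Sketch: flag the contact when the "Pizza Order Bot" is invoked, so a rule can
# later target these contacts when submitting automated evaluations.
connect.update_contact_attributes(
    InstanceId="your-instance-id",         # placeholder: your Amazon Connect instance ID
    InitialContactId="your-contact-id",    # placeholder: the contact being flagged
    Attributes={"pizzaOrderBot": "true"},  # hypothetical attribute from the example above
)
```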


After you have defined conditions:

1. On the **Define actions** page, provide a category name to identify the rule.

1. Choose **Add action**, select **Submit automated evaluation**, and select the form that you want to use for automatically submitting an evaluation. (This action is already selected on the page if you created the rule when you activated the form.)

For more information, see [Create a rule in Contact Lens that submits an automated evaluation](contact-lens-rules-submit-automated-evaluation.md).

# Use scoring and weights on agent evaluation forms in Amazon Connect
<a name="about-scoring-and-weights"></a>

When scoring is enabled for an evaluation form, you can assign *weights* to sections or questions. A weight raises or lowers the impact of that section or question on the final score of the evaluation.

## Example score
<a name="example-score"></a>

Let's say you are assigning a score to a question that is critically important to your business. If the answer is Yes, the agent gets 10 points. If it is No, they get 0 points. This is shown in the following image.

![\[The evaluation form page, the scoring tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoringexample1.png)


The answer to the first question is more important to your business than the answer to *Did the agent close with "Is there anything else I can assist you with today?"*, which is also worth 0-10 points, as shown in the following image. 

![\[The evaluation form page, the scoring tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoringexample2.png)


To differentiate the scores of the questions, you indicate that the weight of one question is greater than that of the other. 

The following image shows that the answer to *Did the agent recite the compliance script for the medication* is 50% of the agent's score, whereas the answer to *Did the agent close with "Is there anything else I can assist you with today"* is weighted at only 5% of the score.

![\[The evaluation form page, the scoring tab, the score weights section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoringexample3.png)


The total weight must always equal 100%.

## Weight distribution mode
<a name="weight-distribution-mode"></a>

With **Weight distribution mode**, you choose whether to assign weight by section or question: 
+ **Weight by section**: You can evenly distribute the weight of each question in the section.
+ **Weight by question**: You can lower or raise the weight of specific questions.

When you change the weight of a section or question, the other weights are automatically adjusted so the total is always 100 percent.

For example, in the following image, three of the questions were manually set to 10 percent. The weights that display in italics were adjusted automatically. 

![\[Score weights for a question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-weightdistribution3.png)


## Weights of optional questions
<a name="weight-optional-questions"></a>

When a question is optional or applicable only in certain scenarios, choose **Enable "Not Applicable"** as an answer option to the question. The following image shows this setting on the **Answers** tab.

![\[The Answers tab, the Enable "Not Applicable" option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-weightsoptional.png)


After an evaluation is completed, Amazon Connect calculates the evaluation score:
+ Questions that are answered as **Not Applicable** do not count toward the form's final score. 
+ Their weight is redistributed proportionally among the remaining questions so that the total sum of weights across all questions remains 100%. 

For example, consider the following table. It represents a form with four questions (Q1, Q2, Q3, and Q4) that have weights of 40%, 20%, 20%, and 20% respectively. Each question has three answer options (A1, A2, and A3) with scores of 10, 5, and 0. An evaluation with answers Q1:A1, Q2:A2, Q3:A2, Q4:A3 would be scored as shown in the table.


| Question | Question weight | Answer | Answer score | Weighted answer score | 
| --- | --- | --- | --- | --- | 
|  Q1  |  40%  | A1  | 10  | 40%  | 
|  Q2  |  20%  | A2  | 5  | 10%  | 
|  Q3  |  20%  | A2  | 5  | 10%  | 
|  Q4  |  20%  | A3  | 0  | 0%  | 

The form's evaluation score = 40% + 10% + 10% + 0% = 60%.

However, if the answer to question Q4 is changed to **Not Applicable**, then the evaluation is scored as follows:


| Question | Question weight | Answer | Additional question weight | Redistributed question weight | Answer score | Weighted answer score | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Q1  |  40%  | A1  | 10% | 50% | 10  | 50%  | 
|  Q2  |  20%  | A2  | 5% | 25% | 5  | 12.5%  | 
|  Q3  |  20%  | A2  | 5% | 25% | 5  | 12.5%  | 
|  Q4  |  20%  | Not Applicable | - | - | -  | - | 

Here's what's going on:
+ Question Q4 is effectively removed from the calculation. Its weight (20%) is distributed among the remaining 3 questions in proportion to their weights.
+ Question Q1 has double the weight of questions Q2 and Q3, so it receives double the amount of added weight. 
+ The form's evaluation score = 50% + 12.5% + 12.5% = 75%.
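
The following minimal Python sketch reproduces this arithmetic: it drops **Not Applicable** questions, redistributes their weight proportionally among the remaining questions, and recomputes the score. The weights and answers mirror the tables above.

```
# Reproduce the scoring arithmetic from the tables above.
# Each question maps to (weight in %, answer score, maximum score, not_applicable).
questions = {
    "Q1": (40, 10, 10, False),
    "Q2": (20, 5, 10, False),
    "Q3": (20, 5, 10, False),
    "Q4": (20, 0, 10, True),  # set to False to reproduce the original 60% score
}

applicable = {name: q for name, q in questions.items() if not q[3]}
total_weight = sum(weight for weight, _, _, _ in applicable.values())

evaluation_score = 0.0
for name, (weight, answer_score, max_score, _) in applicable.items():
    redistributed = weight * 100 / total_weight        # for example, Q1: 40 * 100 / 80 = 50
    weighted_answer = redistributed * answer_score / max_score
    print(f"{name}: weight {redistributed:.1f}%, weighted answer score {weighted_answer:.1f}%")
    evaluation_score += weighted_answer

print(f"Evaluation score: {evaluation_score:.1f}%")    # 75.0% when Q4 is Not Applicable
```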

# Notify supervisors and agents about performance evaluations
<a name="create-evaluation-rules"></a>

You can create rules that automatically send emails or tasks to supervisors and agents based on evaluation results. 
+ Supervisor notifications can drive timely coaching based on performance evaluations. For example, you can notify supervisors if an agent receives an evaluation score below a certain threshold. 
+ Agent notifications can be used to prompt agents to review and acknowledge their evaluations.

**Topics**
+ [Step 1: Define rule conditions for evaluation forms](#rule-conditions-eval)
+ [Step 2: Define rule actions](#rule-actions-eval)
+ [Example rule with multiple conditions](#rule-example-eval)

## Step 1: Define rule conditions for evaluation forms
<a name="rule-conditions-eval"></a>

1. On the navigation menu, choose **Analytics and optimization**, **Rules**.

1. Select **Create a rule**, **Evaluation forms**.

1. Under **When**, use the dropdown list to choose **A Contact Lens evaluation result is available**, as shown in the following image.  
![\[The option When an evaluation result is available.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-rule-condition.png)

1. Choose **Add condition**.   
![\[The list of conditions for when an evaluation result is available.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-rule-condition-all.png)

   You can combine criteria from a set of conditions to build very specific Contact Lens rules. The following are some of the available conditions: 
   + **Evaluation - Form score**: Build rules that run when the score for a specific evaluation form meets a condition. 
   + **Evaluation - Section score**: Build rules that run when the score for a specific section meets a condition. 
   + **Evaluation - Question answer**: Build rules that run when the answer to a specific question meets a condition. 
   + **Evaluation - Results available**: Build rules that run on any evaluation submissions. 
   + **Agent hierarchy**: Build rules that run on a specific agent hierarchy. Agent hierarchies may represent geographical locations, departments, products, or teams.

     To see the list of agent hierarchies so you can add them to rules, you need **Agent hierarchy - View** permissions in your security profile.
   + **Agent**: Build rules that run on a subset of agents. For example, receive notifications on agents belonging to your team.

     To see agent names so you can add them to rules, you need **Users - View** permissions in your security profile. 
   + **Queues**: Build rules that run on a subset of queues. Often organizations use queues to indicate a line of business, topic, or domain. For example, you could build rules specifically for the evaluations of those agents assigned to sales queues.

     To see the queue names so you can add them to rules, you need **Queues - View** permissions in your security profile. 
   + **Contact attributes**: Build rules that run on the values of custom [contact attributes](what-is-a-contact-attribute.md). For example, you can build rules for agent evaluations for a particular line of business or for specific customers, such as based on their membership level, their current country of residence, or if they have an outstanding order. 
   + **Contact segment attributes**: You can identify contacts within rules using custom contact segment attributes with values populated from other systems or using custom logic. You can [define an attribute](predefined-attributes.md#predefined-attributes-create-web-admin) and set its value in flows. Custom segment attributes are only present on that specific contact ID, and not the entire contact chain. For example, you can build a rule that identifies that the customer closed their account during the conversation. 

     To see the list of contact segment attributes to add to a rule, you need **Predefined attributes - View** permission.

1. Choose **Next**.

## Step 2: Define rule actions
<a name="rule-actions-eval"></a>

1. Choose **Add action**. You can choose the following actions:
   + [Create Task](contact-lens-rules-create-task.md)
   + [Send email notification](contact-lens-rules-email.md)
   + [Generate an EventBridge event](contact-lens-rules-eventbridge-event.md)  
![\[The add action dropdown menu, a list of actions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-action-no-wisdom.png)

1. Choose **Next**.

1. Review and make any edits, then choose **Save**. 

1. After you add rules, they are applied to new evaluation submissions that occur after the rule was added. You cannot apply rules to past, stored evaluations.

## Example rule with multiple conditions
<a name="rule-example-eval"></a>

The following image shows a sample rule with six conditions. If any of these conditions are met, the action is triggered.

![\[A rule with six conditions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-multiple-conditions.png)


1. **Evaluation - Form score**: Does the Compliance Form have a score greater than or equal to 50%?

1. **Evaluation - Section score**: In a Compliance Form, does the Greeting section have a score greater than or equal to 70%?

1. **Evaluation - Question score**: Does the Compliance Form question *Did the agent greet the customer properly* equal **Yes**?

1. **Evaluation - Results available**: Have any results been generated for the Compliance Form?

1. **Queues**: Is this for the **BasicQueue**?

1. **Contact attributes**: Does CustomerType equal VIP?

# Provide agent coaching in Amazon Connect
<a name="provide-coaching"></a>

Amazon Connect provides integrated coaching tools that help supervisors deliver structured, data-driven feedback to agents based on performance evaluations. For upcoming one-on-one sessions with agents, supervisors can share detailed coaching feedback with concrete examples, and set performance goals directly within Amazon Connect. Quality management teams can also assign coaching to supervisors with due dates when they identify improvement opportunities, such as showing greater empathy towards customer issues. Once coaching is completed, agents can acknowledge the feedback in Amazon Connect, ensuring that they understand next steps for improvement. Past coaching feedback is centrally accessible, making it easier for agents, supervisors, and quality managers to track agent progress over time.

**Note**  
This feature is available as part of Amazon Connect performance evaluations.

## Assign permissions for coaching
<a name="coaching-permissions"></a>

Permissions can be configured as follows:

1. **Admins and quality managers**: Provide **coaching – manage coaching sessions** permissions. These permissions grant them access to all coaching sessions in your Amazon Connect instance. With this permission, they can assign agent coaching to agents' supervisors.

1. **Supervisors**: Provide **coaching – my coaching sessions** (View, Create, Delete, Edit) permissions. These permissions enable them to create and manage agent coaching with themselves as the coach.

1. **Agents**: Provide **coaching – my coaching sessions – View** permission. This permission enables the agent to view and acknowledge coaching where they are the participant.

For more information, see [Assign security profile permissions for performance evaluations and coaching](evaluation-and-coaching-permissions.md).

## Provide coaching to agents
<a name="coaching-provide-to-agents"></a>

1. Log in to Amazon Connect with a security profile that can [search contacts](contact-search.md) and perform coaching.

1. Select **Analytics and Optimization** > **Contact search** from the navigation bar on the left.

1. From **Contact Search**, find contacts that have been evaluated for the agent that you want to coach. For example, you can find contacts where the evaluation score is less than 70%:  
![\[The Contact Search page with an evaluation score filter applied.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-evaluation-score-filter.png)

1. Open a contact that has been evaluated, and view the evaluations on the right pane.

1. Open an evaluation and click **Coach on this evaluation**.  
![\[The Coach on this evaluation button on an evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-coach-on-this-evaluation-button.png)

1. You can add the entire evaluation, or a specific section or question, to a coaching session:  
![\[Adding evaluation items to a coaching session.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-add-evaluation-items.png)

1. You can link the evaluation, its sections, or its questions to an existing coaching session, or create a new session. Items can be linked as strengths or growth opportunities.  
![\[The dialog for adding a question to a coaching session.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-add-question-to-coaching-dialog.png)

1. After you add an evaluation or its items for coaching, a link will be provided to view the coaching session.

1. You can link up to 10 evaluations or evaluation items to a single coaching session as examples of agent strengths or growth opportunities. To link additional evaluations, repeat steps 2 through 7.

1. You can edit the coaching session by specifying dates, times, and location, providing detailed feedback, and setting improvement goals on coaching topics.  
![\[The edit coaching session page with fields for dates, times, location, feedback, and goals.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-edit-coaching-session.png)
**Note**  
**Session due date** is mandatory.

1. Click **Submit** to save the coaching session as a draft.

1. When the coaching session is ready, click **Share** to make the coaching session visible to the agent. If the agent has an email configured within Amazon Connect (or has a secondary email for a SAML instance), they will receive an email notification with a link to view the coaching session.

1. At the time of coaching, you can access the coaching session on **Analytics and Optimization** > **Coaching sessions**. This page displays all past and upcoming coaching sessions.

1. After the coaching session is finished, click **Mark as Complete** and optionally add a note.

1. Agents can acknowledge the coaching along with their own coaching notes.

## Search for coaching sessions
<a name="coaching-search-sessions"></a>

You can view all past and upcoming coaching sessions from the **Analytics and Optimization** > **Coaching sessions** page.

This page provides advanced search capabilities. You can search for coaching sessions:
+ Performed by a particular coach
+ Where a specific agent was the participant
+ Created by a specific quality manager
+ On a specific topic
+ That are past due date but not completed
+ That are pending completion (shared or draft status)
+ That are completed, but not yet acknowledged by the participant
+ And more

![\[The coaching sessions search page with filter options.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-search-filters.png)


# Acknowledge performance evaluations in Amazon Connect
<a name="acknowledge-evaluations"></a>

When an agent performance evaluation is submitted, you can automatically notify the agent to review their evaluation. For example, you can set up a [rule to send an email](contact-lens-rules-email.md) to the agent when an evaluation is available. You can also walk an agent through their evaluation during coaching.

After the agent has reviewed the performance evaluation, they can acknowledge their review of the evaluation and write an optional note in the Amazon Connect admin website. This acknowledgement enables managers to track whether agents are reviewing the feedback provided on their performance evaluations.

This topic explains the steps for agents to view and acknowledge an evaluation.

**To acknowledge an evaluation**

1. After you have received a performance evaluation for a contact, use your agent account to log in to the Amazon Connect admin website at https://*instance name*.my.connect.aws/.

1. Access the contact evaluation that you want to acknowledge. There are a few ways you can do this:
   + Someone may have shared the contact URL with you.

   - OR - 
   + You may have been assigned a task or received an email notification containing the URL for the contact that received an evaluation.

   - OR - 
   + You may have the contact ID and evaluation form name. You can use this information to search for the contact that received the evaluation using the following steps.

     1. On the navigation pane, choose **Analytics and optimization**, **Contact search**.

     1. Search for the contact that was evaluated but is not yet acknowledged. The following image shows the filters to search for **Acknowledged** = **No**.  
![\[The Filters section of the Contact search page, set to Acknowledged = No.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack1.png)

1. On the **Contact details** page, choose **Evaluations** or expand the evaluation panel by choosing the **<** icon, as shown in the following image.  
![\[The Evaluations button, and the icon to expand the evaluation pane.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack2.png)

1. The **Evaluations** panel lists any evaluations that are in progress or completed for the contact. To acknowledge an evaluation, choose an evaluation from the list of **Completed evaluations**. The following image shows one evaluation that has been completed: **Customer servicing scorecard**.  
![\[The Evaluations pane, the completed evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack3.png)

1. Choose the evaluation you want to review. At the bottom of the evaluation, choose **Acknowledge**, as shown in the following image. 
**Note**  
Only the agent who was evaluated can acknowledge the evaluation.  
![\[The Evaluations pane, the completed evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack4.png)

1. In the **Acknowledge evaluation result** dialog box, provide an optional comment. For example, *Manager walked through the evaluation during coaching on March 5th, 2025*. 

   When you're finished, choose **Confirm**.   
![\[The Acknowledge evaluation result section, the Confirm button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack5.png)

1. A message is displayed that the evaluation acknowledgement is **Completed**, as shown in the following image.   
![\[A message that the evaluation is successfully acknowledged.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack6.png)

1. You can only acknowledge an evaluation after it is submitted. If an evaluation is re-submitted, it again becomes eligible for acknowledgement.

1. To view the acknowledgement note, select the acknowledged evaluation, and then choose the **view note** link.  
![\[The Acknowledgement note.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack7.png)

# Random sampling of contacts for evaluation in Amazon Connect
<a name="random-sampling-of-contacts-for-evaluation"></a>

Amazon Connect provides managers with a random sample of their agents’ contacts for evaluation, removing manager bias and streamlining the evaluation process. On Contact Search, managers can specify the number of contacts that they need to evaluate for each agent, as per union agreements, regulations, or internal guidelines. They then receive the required number of contacts, randomly selected from the specified timeframe, for example, 3 contacts per agent from the last week. In addition, managers can apply additional filters within Contact Search to ensure that the provided contacts are suitable for evaluation. For example, contacts must be longer than 180 seconds, have associated audio or screen recordings and transcripts, and have not yet been evaluated. Once the sample is generated, you can select an evaluation form and create draft evaluations in bulk for each of the contacts within the sample. Evaluations created in this way denote that the contact was selected through random sampling, and provide auditability to ensure that the filter criteria did not introduce any bias in selection.
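
Random sampling itself is performed on the Contact Search page, but as an illustrative, hedged sketch, the SearchContacts API can retrieve the pool of an agent's recent contacts that such a sample is drawn from. The IDs below are placeholders, and the client-side `random.sample` call stands in for the built-in feature, which also records auditability metadata that this sketch does not.

```
import random
from datetime import datetime, timedelta, timezone

import boto3

connect = boto3.client("connect")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)  # trailing week

# Sketch: pull one agent's contacts from the last week, then take a local
# random sample of 3, similar in spirit to the sampling criteria in the UI.
response = connect.search_contacts(
    InstanceId="your-instance-id",                  # placeholder
    TimeRange={"Type": "INITIATION_TIMESTAMP", "StartTime": start, "EndTime": end},
    SearchCriteria={"AgentIds": ["your-agent-id"]}, # placeholder agent ID
    MaxResults=100,
)
contacts = response.get("Contacts", [])
sample = random.sample(contacts, k=min(3, len(contacts)))
for contact in sample:
    print(contact["Id"])
```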

**Random sampling of contacts for evaluation**

1.  Log in to Amazon Connect with a user account that has the following permissions on their security profile: 

   1.  Contact Search - View 

   1.  Sample contacts 

   1.  Evaluation forms – perform evaluations 

1. Select the timeframe of contacts for evaluation, such as the trailing week. Note that you can sample contacts from a maximum period of 5 weeks.  
![\[Select timeframe\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-time-range.png)

1. Select the agent or agent hierarchy that you need to evaluate.  
![\[Filter search - Agent\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-agent-filter.png)  
![\[Add filter - Agent\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-agent-filter-select.png)

1. Apply any additional filters to select only those contacts that are suitable for evaluation.
   + **Conversational analytics**: Ensures that the contact is analyzed by conversational analytics and has a transcript
   + **Recording**: Filter contacts with audio recording (voice) or screen recording (video)
   + **Interaction Duration**: You can choose contacts with a minimum and maximum agent-customer interaction duration
   + **Evaluation Status**: Only select contacts that have not yet been evaluated  
![\[Add additional filters\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-search-filters.png)

1. Specify the sampling criteria, such as 5 contacts per agent, and click **Apply** to generate a sample.  
![\[Sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-criteria.png)

1. You can save the set of filters and sampling criteria as a saved search.  
![\[Save filters and sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-save-search.png)![\[Save filters and sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-save-search-name.png)![\[Save filters and sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-save-search-banner.png)

1. Once the sample is generated, you can create draft evaluations in bulk across all the contacts.
   + Select **Create Draft Evaluations**
   + Select the **Evaluation Form**  
![\[Create draft evaluations\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-create-draft-eval-empty.png)  
![\[Select evaluation form\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-create-draft-eval-form-select.png)

   This associates the draft evaluations with the sample name.
**Note**  
This step is required if you need to retrieve the contact sample in the future.  

![\[Creating draft evaluations\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-in-progress-banner.png)


![\[Draft evaluations successfully created\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-success-banner.png)


## Retrieving and viewing sampled contacts for evaluation
<a name="retrieve-and-view-sampled-contacts-for-evaluation"></a>

To retrieve the contact sample in the future, go to Contact Search and apply the filter **Evaluation – contact samples**. Note that contact samples are specific to the user that generated the sample.

![\[Create draft evaluations\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-contact-samples-filter.png)


## Auditing sampling criteria
<a name="auditing-sampling-criteria"></a>

If you open an evaluation, it indicates whether contact sampling was used to create the evaluation. You can click **Yes** to audit the filter criteria used to generate the contact sample, ensuring that the filters did not introduce any bias (for example, negative customer sentiment) during the contact selection process.

![\[Create draft evaluations - contact details\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-evals-list.png)


![\[Create draft evaluations - evaluation overview\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-sampled-eval.png)


![\[Create draft evaluations - contact sample details\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-sampled-eval-details.png)


# Request reviews of (appeal) performance evaluations in Amazon Connect
<a name="evaluation-review-requests"></a>

When an agent performance evaluation is submitted, you can automatically notify the agent to review their evaluation. For example, you can set up a [rule to send an email](contact-lens-rules-email.md) to the agent when an evaluation is available. Once they have reviewed an evaluation, they can [acknowledge](acknowledge-evaluations.md) the evaluation. If they disagree with the feedback within an evaluation, they can request a review of (appeal) the evaluation. When a review is requested, designated managers are automatically notified via email. They can then revise the evaluation or add additional notes that justify the original evaluation, before completing the review. Upon completion, the user who requested the review and the evaluated agent are notified via email.

## How do I enable review requests (appeals)?
<a name="enable-review-requests"></a>

Amazon Connect enables you to specify which evaluation forms support review requests. To enable review requests on an evaluation form:

1. Log in to Amazon Connect with a user account that has the following security profile permission: **Analytics and Optimization** - **Evaluation forms - manage form definitions** - **Create**

1. Choose **Analytics and optimization**, then choose **Evaluation forms**.

1. Open an existing form by clicking the hyperlink for the latest version, or create a new evaluation form.

1. Click the **Additional settings** tab.

1. Click **Allow review requests**.

1. You can specify the time window within which a review can be requested on an evaluation. The time window is measured from the time of the original submission of an evaluation.

1. You can also choose one or more recipients who will be notified via email when a review is requested. The email has a link to the contact with the evaluation for which a review is requested. Note that for users to receive emails on a SAML-authenticated instance, a secondary email needs to be provided within the user's profile in Amazon Connect.

1. Once you **Activate** the form, subsequent evaluations performed using the form will support review requests.

![\[Additional settings tab showing Allow review requests option\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-enable.png)


## Who can request reviews of an evaluation?
<a name="who-can-request-reviews"></a>

For users to request reviews of evaluations, they need the **Evaluation forms - request evaluation reviews - Create and View** permissions, in addition to access to the underlying contacts and evaluations. Permissions to request reviews can be granted to agents, or to their supervisors, who can request evaluation reviews from the quality management team on behalf of their agents. Supervisors granted the permission to **request evaluation reviews** can request a review on any evaluation that they can access.

Users granted the **Evaluation forms - request evaluation reviews - Delete** permission can delete a request before the review has started.

## Who can review an evaluation?
<a name="who-can-review-evaluations"></a>

Users with the **Evaluation forms - review evaluations - Create and View** permissions can perform reviews. If certain personas need to be consulted on reviews, but should not be granted permissions to perform reviews themselves, then you can grant them **Evaluation forms - review evaluations - View** permissions.

## Requesting a review
<a name="requesting-review"></a>

1. On the **Contact details** page, open a completed evaluation for which you want to request a review.

1. Select **Request a review** at the bottom of the evaluation.

1. Explain why you are requesting a review (you cannot leave this blank), and then click **Confirm**.

1. The evaluation shows under **Review requested** on the evaluations pane.

1. You can cancel a request if the review has not yet been started.

![\[Request a review button on evaluation\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-request.png)


![\[Request review dialog with explanation field\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-requestcomment.png)


![\[Evaluation showing Review requested status\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-requested.png)


## Searching for pending reviews
<a name="searching-pending-reviews"></a>

As mentioned above, you can configure in the evaluation form who is automatically notified via email when a review is requested. These notification emails contain links to contacts with evaluations for which a review is requested. Additionally, users with appropriate permissions can search for contacts with evaluations for which a review is requested or which are already under review:

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts) and the **Evaluation forms - perform evaluations** permission.

1. On the navigation bar, choose **Analytics and optimization**, **Contact search**.

1. Use the time range filter to search for contacts from the relevant time window, for example, the last month.

1. Use the evaluation status filter with the value **Review requested** to search for contacts with evaluations where a review has been requested but not yet picked up for review.

1. Use the evaluation status filter with the value **Under review** to search for contacts with evaluations that have been picked up for review.

![\[Contact search with evaluation status filter\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-searchrequested.png)


## Starting and completing reviews
<a name="starting-completing-reviews"></a>

1. Open the evaluations pane on the **Contact details** page.

1. Click on an evaluation listed under **Review requested**.

1. Click **Start review**.

1. The original evaluation is listed below **Under review** and can be viewed by clicking on it.

1. The in-progress review is listed under **Evaluation reviews**. Users with the **Evaluation forms - review evaluations - Create** permission can make edits to the evaluation, such as changing answers or amending the notes. You can **Save** your review at any time and click **Resolve review** to finalize the review.

1. This sends an automated email notification to the user who requested the review.

![\[Evaluation review in progress\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-view.png)


# Search for and view evaluations in Amazon Connect
<a name="search-evaluations"></a>

Users can search for evaluated contacts and view evaluations alongside audio or screen recordings, conversation transcripts, summaries, and insights. 

## Searching for evaluated contacts
<a name="w2aac32c11c41b5"></a>

1. Log in to Amazon Connect with a user account that has [permissions to search for and view contacts](contact-search.md#required-permissions-search-contacts) and either the **Evaluation forms - perform evaluations** or **Evaluation forms – view my received evaluations** permission. 

1. In Amazon Connect choose **Analytics and optimization**, **Contact search**. 

1. Use the filters on the page to narrow your search. For the date selection, you can search for up to 8 weeks of contacts at a time. You can review contacts and associated evaluations from up to 2 years ago.  
![\[The search filters for evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-searchfilters1.png)

1. Click on the contact ID in the search results to open a contact and review associated evaluations.

## View evaluations on the contact details page
<a name="w2aac32c11c41b7"></a>

1. Click **Evaluations** at the top right of the page.

1. If the contact has received an evaluation, it shows under **Completed**.

1. Click the evaluation to review it.

![\[The list view of evaluations on the contact details page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-listView.png)


![\[The detail view of an evaluation on the contact details page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-detailView.png)


# Use a reference ID to represent questions in a report about contact center agent performance
<a name="evaluationforms-referenceid"></a>

A *reference ID* is a token that appears in the JSON output file. It represents a specific question. When building reports, you can use it in place of the exact wording of a question. 

For example, a question might be "Did agents stick to the script?" but the next day the question might be changed to "Was there good script adherence?" Regardless of how the question is worded, the reference ID always stays the same.
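
For example, a small reporting script can group exported evaluation output (described later in this guide) by `questionRefId`, so that rewording a question does not split its history. A minimal sketch, assuming the JSON exports have been downloaded to a local `exported-evaluations` folder (a hypothetical path):

```
import json
from collections import defaultdict
from pathlib import Path

# Group question scores from exported evaluation files by questionRefId, so
# reports stay stable even if the question wording changes between versions.
scores_by_ref_id = defaultdict(list)

for path in Path("exported-evaluations").glob("*.json"):  # hypothetical local folder
    evaluation = json.loads(path.read_text())
    for question in evaluation.get("questions", []):
        score = question.get("score", {})
        if "percentage" in score:
            scores_by_ref_id[question["questionRefId"]].append(score["percentage"])

for ref_id, scores in scores_by_ref_id.items():
    print(f"{ref_id}: average {sum(scores) / len(scores):.1f}% over {len(scores)} evaluations")
```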

# Evaluation metrics in Amazon Connect
<a name="evaluation-metrics"></a>

You can view the following metrics on the [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md). These metrics enable you to view aggregated agent performance, and get insights across agent cohorts and over time. 

## Average evaluation score
<a name="average-evaluation-score-hmetric"></a>

This metric provides the average evaluation score for all submitted evaluations. Evaluations for calibrations are excluded from this metric.

The average evaluation score corresponds to the grouping. For example, if the grouping contains evaluation questions, then the average evaluation score is provided for the questions. If the grouping does not contain evaluation form, section or question, then the average evaluation score is at an evaluation form level.

**Metric type**: Percent

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_EVALUATION_SCORE`

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Get the sum of evaluation scores: forms / sections / questions.
+ Get total number of evaluations where scoring has been completed and recorded.
+ Calculate average score: (sum of scores) / (total evaluations).

**Notes**:
+ Excludes calibration evaluations. 
+ Score granularity depends on grouping level. 
+ Returns percentage value. 
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups. 
+ Based on submitted evaluation timestamp. 
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.
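
As a hedged sketch, the following boto3 call retrieves this metric, together with the related evaluation metrics described later in this topic, for the last 7 days, filtered to one queue and grouped by agent. The instance ARN and queue ID are placeholders.

```
from datetime import datetime, timedelta, timezone

import boto3

connect = boto3.client("connect")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Sketch: evaluation metrics for the last 7 days, filtered to one queue and
# grouped by agent. Replace the ARN and queue ID with your own values.
response = connect.get_metric_data_v2(
    ResourceArn="arn:aws:connect:us-east-1:111122223333:instance/your-instance-id",  # placeholder
    StartTime=start,
    EndTime=end,
    Filters=[{"FilterKey": "QUEUE", "FilterValues": ["your-queue-id"]}],              # placeholder
    Groupings=["AGENT"],
    Metrics=[
        {"Name": "AVG_EVALUATION_SCORE"},
        {"Name": "AVG_WEIGHTED_EVALUATION_SCORE"},
        {"Name": "EVALUATIONS_PERFORMED"},
    ],
)
for result in response["MetricResults"]:
    print(result["Dimensions"], result["Collections"])
```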

## Average weighted evaluation score
<a name="average-weighted-evaluation-score-hmetric"></a>

This metric provides the average weighted evaluation score for all submitted evaluations. Evaluations for calibrations are excluded from this metric.

The weights are per the evaluation form version that was used to perform the evaluation. 

 The average evaluation score corresponds to the grouping. For example, if the grouping contains evaluation questions, then the average evaluation score is provided for the questions. If the grouping does not contain evaluation form, section or question, then the average evaluation score is at an evaluation form level. 

**Metric type**: Percent

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_WEIGHTED_EVALUATION_SCORE`

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Get sum of weighted scores using form version weights.
+ Get total number of evaluations where scoring has been completed and recorded.
+ Calculate weighted average: (sum of weighted scores) / (total evaluations).

**Notes**:
+ Uses evaluation form version-specific weights. 
+ Excludes calibration evaluations. 
+ Score granularity depends on grouping level. 
+ Returns percentage value. 
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups. 
+ Based on submitted evaluation timestamp. 
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.

## Automatic fails percent
<a name="percent-evaluation-automatic-failures-hmetric"></a>

This metric provides the percentage of performance evaluations with automatic fails. Evaluations for calibrations are excluded from this metric. 

If a question is marked as an automatic fail, then the parent section and the form is also marked as an automatic fail. 

**Metric type**: Percent

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Get total automatic fails count.
+ Get total evaluations performed.
+ Calculate percentage: (automatic fails / total evaluations) x 100.

**Notes**:
+ Automatic fail cascades up (question → section → form).
+ Excludes calibration evaluations.
+ Returns percentage value.
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups.
+ Based on submitted evaluation timestamp.
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.

## Evaluations performed
<a name="evaluations-performed-hmetric"></a>

This metric provides the number of evaluations performed with evaluation status as "Submitted." Evaluations for calibrations are excluded from this metric.

**Metric type**: Integer

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `EVALUATIONS_PERFORMED`

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Check that evaluationId is present.
+ Verify itemType is form.
+ Count submitted evaluations (excluding calibrations).

**Notes**:
+ Counts only submitted evaluations.
+ Excludes calibration evaluations.
+ Returns integer count.
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups.
+ Based on submitted evaluation timestamp.
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.

# Agent evaluation form output in Amazon Connect
<a name="evaluationforms-example-output-file"></a>

This section shows the export output path for evaluations, provides an example of evaluation form scores, and describes the evaluation form metadata.

**Topics**
+ [Verify your S3 bucket](#verify-evaluation-s3bucket)
+ [Example output locations](#example-evaluationform-output-locations)
+ [Known issue](#release-note-evaluation-output)
+ [Example scores](#example-evaluation-output-file)
+ [Evaluation form metadata definitions](#evaluation-form-metadata)
+ [Sample exported evaluation](#exported-evaluation)

## Verify your S3 bucket
<a name="verify-evaluation-s3bucket"></a>

When you enable **Contact evaluations** in the Amazon Connect console, you are prompted to create or choose an S3 bucket to store the evaluations. To verify the name of the bucket, go to your instance alias, choose **Data storage**, **Contact evaluations**, then **Edit**.

## Example output locations
<a name="example-evaluationform-output-locations"></a>

Following is the output file path for evaluation forms:
+ *contact-evaluations-S3-bucket*/Evaluations/*YYYY/MM/DD/hh:mm:ss.sTZD*-*evaluation-id*.json

For example:

`amazon-connect-s3/Evaluations/2022/04/14/05:04:20.869Z-11111111-2222-3333-4444-555555555555.json`
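
A minimal sketch for retrieving exported evaluations from this path with boto3; the bucket name and date prefix follow the example above and should be replaced with your own.

```
import json

import boto3

s3 = boto3.client("s3")

bucket = "amazon-connect-s3"        # placeholder: your contact evaluations bucket
prefix = "Evaluations/2022/04/14/"  # the date partition from the example path

# List the evaluation output files for one day and load each JSON document.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
        evaluation = json.loads(body)
        print(obj["Key"], evaluation["metadata"]["score"])
```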

## Known issue: Two output files for the same evaluation
<a name="release-note-evaluation-output"></a>

Contact Lens generates two output files for the same evaluation form.
+ One file is written to the new default S3 path. You can configure the path in the AWS console.
+ Another file, which will be deprecated, is written to a different, previous S3 path. You can disregard this file.

  The previous S3 path looks like the following:
  + *s3-bucket*/Evaluations/contact\_*contactId*/evaluation\_*evaluationId*/YYYY-MM-DDThh:mm:ss.sTZD.json

## Example scores
<a name="example-evaluation-output-file"></a>

The following example shows a typical score.

```
{
  "schemaVersion": "3.5",
  "evaluationId": "fb90de35-4507-479a-8b57-970290fd5c2c",
  "metadata": {
    "contactId": "badd4896-75f7-43b3-bee6-c617ed3d04cb",
    "accountId": "874551140838",
    "instanceId": "8f753c94-9cd2-4f16-85eb-945f7f0d559a",
    "agentId": "286bcec0-e722-4166-865f-84db80252218",
    "evaluationDefinitionTitle": "Compliance Evaluation Form",
    "evaluator": "jane",
    "evaluationDefinitionId": "15d8fbf1-b4b2-4ace-869b-82714e2f6e3e",
    "evaluationDefinitionVersion": 2,
    "evaluationStartTimestamp": "2025-11-14T17:57:08.649Z",
    "evaluationSubmitTimestamp": "2025-11-14T17:59:29.052Z",
    "score": {
      "percentage": 100
    },
    "creator": "jane.doe@acme.com",
    "autoEvaluated": false,
    "resubmitted": false,
    "evaluationSource": "ASSISTED_BY_AUTOMATION",
    "evaluationType": "CONTACT_EVALUATION",
    "evaluationAcknowledgerComment": "The Acknowledgment comment",
    "evaluationAcknowledgedTimestamp": "2025-12-22T05:20:39.297Z",
    "evaluationAcknowledgedByUserName": "john",
    "evaluationAcknowledgedByUserId": "286bcec0-e722-4166-865f-84db80252218"
  },
  "sections": [
    {
      "sectionRefId": "s1a1b58d6",
      "sectionTitle": "The title of the section",
      "notes": "Section note",
      "score": {
        "percentage": 100
      }
    },
    {
      "sectionRefId": "s46661c49",
      "sectionTitle": "The title of the subsection",
      "parentSectionRefId": "s1a1b58d6",
      "score": {
        "percentage": 100
      }
    }
  ],
  "questions": [
    {
      "questionRefId": "q570b206a",
      "sectionRefId": "s46661c49",
      "questionType": "NUMERIC",
      "questionText": "How do you rate the contact between 1 and 10?",
      "answer": {
        "value": "",
        "notes": "Add more information here",
        "metadata": {
          "notApplicable": true
        }
      },
      "score": {
        "notApplicable": true
      }
    },
    {
      "questionRefId": "q73bc5b9d",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent introduce themselves?",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "o6999aa94",
            "selected": true
          },
          {
            "valueText": "No",
            "valueRefId": "o284e4d9e",
            "selected": false
          },
          {
            "valueText": "Maybe",
            "valueRefId": "o1b2f0a14",
            "selected": false
          }
        ],
        "notes": "Add more information here",
        "automation": {
          "status": "SYSTEM_ANSWER",
          "systemSuggestedValue": "Yes"
        },
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 100
      }
    },
    {
      "questionRefId": "h89bc7a9t",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent offer a promotion?",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "p7888bb85",
            "selected": false
          },
          {
            "valueText": "No",
            "valueRefId": "p395f5e8f",
            "selected": true
          },
          {
            "valueText": "Maybe",
            "valueRefId": "p2c3g1b25",
            "selected": false
          }
        ],
        "notes": "Add more information here",
        "assistedSuggestion": {
          "value": "No. A promotion was not offered by the agent."
        },
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 100
      }
    },
    {
      "questionRefId": "qc2effc9d",
      "sectionRefId": "s46661c49",
      "questionType": "TEXT",
      "questionText": "Describe the outcome.",
      "answer": {
        "value": "Example answer text",
        "notes": "Add more information here",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 50
      }
    }
  ]
}
```
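
The following short sketch reads an export like the one above and prints the overall score plus the selected answer for each single-selection question; the local file name is a placeholder, and the field names match the example output.

```
import json

# Load one exported evaluation (for example, a file downloaded from S3).
with open("evaluation.json") as f:  # placeholder local file name
    evaluation = json.load(f)

meta = evaluation["metadata"]
print(f"Form: {meta['evaluationDefinitionTitle']} (v{meta['evaluationDefinitionVersion']})")
print(f"Overall score: {meta['score'].get('percentage')}%")

for question in evaluation["questions"]:
    if question["questionType"] != "SINGLESELECT":
        continue
    selected = [v["valueText"] for v in question["answer"]["values"] if v["selected"]]
    print(f"{question['questionText']} -> {', '.join(selected) or 'no answer'}")
```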

## Evaluation form metadata definitions
<a name="evaluation-form-metadata"></a>

The following list describes the fields in the evaluation form.

**evaluationId**  
A unique identifier for the contact evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500

**metadata**    
**contactId**  
The identifier of the contact in this instance of Amazon Connect.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 256  
**accountId**  
The identifier of the AWS account running the instance of Amazon Connect.  
*Type* – String  
*Length constraints* – 12 digits  
*Pattern* – `^\d{12}$`  
**instanceId**  
The identifier of the Amazon Connect instance. You can [find the instance ID](find-instance-arn.md) in the Amazon Resource Name (ARN) of the instance.  
*Length constraints* – Minimum length of 1, maximum length of 100  
**agentId**  
The identifier of the agent who performed the contact.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluationDefinitionTitle**  
The title of the evaluation form.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 128  
**evaluator**  
Name of the user who last updated the evaluation.  
*Type* – String  
**evaluationDefinitionId**  
The unique identifier for the evaluation form.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluationDefinitionVersion**  
The version of the evaluation form.  
*Type* – Integer  
*Valid range* – Minimum value of 1  
**evaluationStartTimestamp**  
The evaluation's creation timestamp.  
*Type* – Timestamp  
*Example* – 2025-11-14T17:57:08.649Z  
**evaluationSubmitTimestamp**  
The evaluation's submission timestamp.  
*Type* – Timestamp  
*Example* – 2025-11-14T17:59:29.052Z  
**score**  
The evaluation's score.  
**creator**  
The entity that created the evaluation the very first time (as opposed to "evaluator", which represents the entity that last submitted the evaluation). When the call is made from the Amazon Connect admin website, it contains the username. When the call comes from the API, it contains the ARN of the caller.   
*Type* – String  
**autoEvaluated**  
 Indicates whether the evaluation was submitted using fully automated evaluations.  
*Type* – Boolean  
**resubmitted **  
 Indicates whether the evaluation has been re-submitted (edited and submitted again).  
*Type* – Boolean  
**evaluationSource **  
The type of evaluation answer source.  
*Type* – String  
Valid values:  
+ `ASSISTED_BY_AUTOMATION` - indicates that [question automation](create-evaluation-forms.md#step-automate) was used to answer some of the questions.
+ `MANUAL` - indicates that the evaluation was performed manually.
+ `AUTOMATED` - indicates that the evaluation was submitted using fully automated evaluations (see "autoEvaluated" field).  
**evaluationType**  
The type of evaluation.  
*Type* – String  
Valid values:  
+ `CONTACT_EVALUATION` - evaluation of a contact.  
**calibrationSessionId**  
The identifier of the calibration session associated with this evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluatedParticipantId**  
The identifier of the participant being evaluated.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 256  
**evaluatedParticipantRole**  
The role of the participant being evaluated.  
*Type* – String  
Valid values:  
+ `AGENT` - the agent participant.
+ `CUSTOMER` - the customer participant.
+ `SYSTEM` - the system participant.  
**acknowledgerComment**  
Comment left by the user who acknowledged the evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 3072  
**evaluationAcknowledgedByUserId**  
The identifier of the person who acknowledged the evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluationAcknowledgedByUserName**  
The name of the person who acknowledged the evaluation.  
*Type* – String  
**evaluationAcknowledgedTimestamp**  
The evaluation's acknowledgment timestamp.   
*Type* – Timestamp  
*Example* – 2025-12-24T15:45:56.662Z

**sections**  
Array of the sections of the evaluation.    
**sectionRefId**  
The identifier of the section. An identifier must be unique within the evaluation form.   
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40  
**parentSectionRefId**  
The identifier of the parent section.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40  
**sectionTitle**  
The title of the section.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 128  
**notes**  
The notes left for the section.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 3072  
Notes have the following limits:  
+ Individual notes have a limit of 3072 characters. 
+ The combined notes in an evaluation have a limit of *N* x 1024 characters, where *N* is the number of questions in the evaluation.  
**score**  
The score for the section.    
**percentage**  
The score percentage for an item in a contact evaluation.  
*Type* – Double  
*Valid range* – Minimum value of 0, maximum value of 100  
**automaticFail**  
The flag that marks the item as automatic fail. If the item or a child item gets an automatic fail answer, this flag will be true.  
*Type* – Boolean  
**notApplicable**  
The flag that marks the item as not applicable for scoring. If this flag is true, the item is excluded from scoring calculations.  
*Type* – Boolean

**questions**  
Array of the questions of the evaluation.    
**questionRefId**  
The identifier of the question. An identifier must be unique within the evaluation form.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40.  
**sectionRefId**  
The identifier of the parent section.   
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40  
**questionType**  
The type of the question.  
*Type* – String  
*Valid values* – `TEXT | SINGLESELECT | NUMERIC`  
**questionText**  
The title of the question.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 350  
**answer**  
The answer for the question.    
**value**  
The string/numeric value for an answer in a contact evaluation.  
*Type* – String/Double  
*Length constraints* – String: Minimum length of 0, maximum length of 128  
**notes**  
The notes left for the question.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 3072  
Notes have the following limits:  
+ Individual notes have a limit of 3072 characters. 
+ The combined notes in an evaluation have a limit of *N* x 1024 characters, where *N* is the number of questions in the evaluation.  
**metadata**  
**notApplicable**  
Flag that marks the question as not applicable.  
*Type* – Boolean  
**assistedSuggestion**  
Answer suggested by the [generative AI](generative-ai-performance-evaluations.md).  
*Type* – String  
**automation**    
**status**  
The status of the automation answer.  
*Type* – String  
*Valid values* – `UNAVAILABLE | SYSTEM_ANSWER | OVERRIDDEN_ANSWER`  
**systemSuggestedValue**  
The string or numeric value for an automation answer in a contact evaluation.  
*Type* – String or Double  
*Length constraints* – String: Minimum length of 0, maximum length of 128  
**score**  
The [score](#score) for the question.  
+ automaticFail - The flag that marks the item as critical for the form. If the item or a child item gets an automatic fail answer, this flag is true and the full form also fails (it is scored zero).

  *Type* – Boolean
+ notApplicable - The flag that marks the item as not applicable for scoring. Items with this flag are excluded from scoring calculations.

  *Type* – Boolean

## Sample exported evaluation
<a name="exported-evaluation"></a>

The following example shows a typical exported evaluation.

```
{
  "schemaVersion": "3.5",
  "evaluationId": "fb90de35-4507-479a-8b57-970290fd5c2c",
  "metadata": {
    "accountId": "874551140838",
    "instanceId": "8f753c94-9cd2-4f16-85eb-945f7f0d559a",
    "contactId": "badd4896-75f7-43b3-bee6-c617ed3d04cb",
    "agentId": "286bcec0-e722-4166-865f-84db80252218",
    "evaluationDefinitionTitle": "Legal Compliance Evaluation Form",
    "evaluator": "jane",
    "evaluationDefinitionId": "15d8fbf1-b4b2-4ace-869b-82714e2f6e3e",
    "evaluationDefinitionVersion": 2,
    "evaluationStartTimestamp": "2022-11-14T17:57:08.649Z",
    "evaluationSubmitTimestamp": "2022-11-14T17:59:29.052Z",
    "score": {
      "percentage": 85
    },
    "autoEvaluated": false,
    "creator": "john",
    "resubmitted": false,
    "evaluationSource": "ASSISTED_BY_AUTOMATION",
    "evaluationType": "CONTACT_EVALUATION",
    "calibrationSessionId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "evaluationAcknowledgedByUserId": "286bcec0-e722-4166-865f-84db80252218",
    "evaluationAcknowledgedByUserName": "mike",
    "evaluationAcknowledgedTimestamp": "2022-12-24T15:45:56.662Z",
    "evaluationAcknowledgerComment": "Manager walked through the evaluation during coaching",
    "evaluatedParticipantId": "participant-123",
    "evaluatedParticipantRole": "AGENT"
  },
  "sections": [
    {
      "sectionRefId": "s1a1b58d6",
      "sectionTitle": "Communication Skills",
      "notes": "Overall communication was professional",
      "score": {
        "percentage": 90
      }
    },
    {
      "sectionRefId": "s46661c49",
      "sectionTitle": "Greeting and Introduction",
      "parentSectionRefId": "s1a1b58d6",
      "notes": "Agent followed proper greeting protocol",
      "score": {
        "percentage": 100
      }
    }
  ],
  "questions": [
    {
      "questionRefId": "q570b206a",
      "sectionRefId": "s46661c49",
      "questionType": "NUMERIC",
      "questionText": "How many times did agent interrupt the customer",
      "answer": {
        "value": "2",
        "notes": "Interruptions were minimal and appropriate",
        "metadata": {
          "notApplicable": false,
          "automation": {
            "status": "OVERRIDDEN_ANSWER",
            "systemSuggestedValue": "3"
          }
        }
      },
      "score": {
        "percentage": 80
      }
    },
    {
      "questionRefId": "q73bc5b9d",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent introduce themselves?",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "o6999aa94",
            "selected": true
          },
          {
            "valueText": "No",
            "valueRefId": "o284e4d9e",
            "selected": false
          },
          {
            "valueText": "N/A",
            "valueRefId": "system_default_null_value",
            "selected": false
          }
        ],
        "notes": "Agent provided clear introduction with name and department",
        "metadata": {
          "notApplicable": false,
          "assistedSuggestion": {
            "value": "The agent introduced themselves at the beginning of the call."
          }
        }
      },
      "score": {
        "percentage": 100
      }
    },
    {
      "questionRefId": "h89bc7a9t",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent ask for consent to perform a credit check",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "o6999aa94",
            "selected": false
          },
          {
            "valueText": "No",
            "valueRefId": "o284e4d9e",
            "selected": true
          },
          {
            "valueText": "N/A",
            "valueRefId": "system_default_null_value",
            "selected": false
          }
        ],
        "notes": "Agent failed to obtain consent before credit check",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 0,
        "automaticFail": true
      }
    },
    {
      "questionRefId": "qc2effc9d",
      "sectionRefId": "s46661c49",
      "questionType": "MULTISELECT",
      "questionText": "What topics were discussed during the call",
      "answer": {
        "values": [
          {
            "valueText": "Account balance",
            "valueRefId": "topic_balance",
            "selected": true
          },
          {
            "valueText": "Payment options",
            "valueRefId": "topic_payment",
            "selected": true
          },
          {
            "valueText": "Account closure",
            "valueRefId": "topic_closure",
            "selected": false
          }
        ],
        "notes": "Customer inquired about balance and payment plans",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "notApplicable": true
      }
    },
    {
      "questionRefId": "q8a9b0c1d",
      "sectionRefId": "s46661c49",
      "questionType": "TEXT",
      "questionText": "What was your general impression about the customer's satisfaction",
      "answer": {
        "value": "The customer seemed satisfied with the resolution and thanked the agent",
        "notes": "Positive customer sentiment throughout the call",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "notApplicable": true
      }
    },
    {
      "questionRefId": "q2b3c4d5e",
      "sectionRefId": "s46661c49",
      "questionType": "DATETIME",
      "questionText": "What time was the follow-up scheduled",
      "answer": {
        "value": "2024-04-16T14:30:00+01:00",
        "notes": "Follow-up appointment confirmed with customer",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "notApplicable": true
      }
    }
  ]
}
```
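
Exported evaluations are written to the S3 bucket that you configured for content evaluations. The exact bucket name and key prefix depend on your instance configuration, so the paths in the following sketch are placeholders; it assumes the AWS CLI and `jq` are available, and simply lists the exports and reads two fields from one of them.

```
# List exported evaluation files in the bucket configured for content evaluations
# (bucket name and prefix are placeholders; adjust them to your configuration).
aws s3 ls s3://amzn-s3-demo-bucket/connect/my-instance-alias/ --recursive

# Copy one export locally and read the overall score and the evaluator with jq.
aws s3 cp s3://amzn-s3-demo-bucket/connect/my-instance-alias/some-export.json ./evaluation.json
jq '{score: .metadata.score.percentage, evaluator: .metadata.evaluator}' ./evaluation.json
```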

# Monitor performance evaluation failure events
<a name="performance-evaluation-events"></a>

You can use EventBridge and CloudWatch to monitor failures of automated evaluations and failures of S3 exports of contact evaluations, and then use these events to investigate and fix the failures. The following guide walks through creating custom EventBridge rules to monitor performance evaluation failure events.

## Step-by-step guide
<a name="performance-evaluation-events-guide"></a>

This guide shows how to create an EventBridge rule that logs Amazon Connect failed automated evaluation submission events and failed S3 exports of contact evaluations to your AWS account.

1. Log in to your AWS account and navigate to the EventBridge console. Under **Buses**, choose **Rules**.  
![\[The Rules tab under the Buses section in the EventBridge console.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-rules-tab.png)

1. Choose **Create rule** with the default Event bus selected.  
![\[The Create rule button with the default Event bus selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-create-rule.png)

1. Give the rule a name and select **Rule with an event pattern** for the Rule type. Choose **Next**.  
![\[The rule name and Rule with an event pattern option selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-rule-name.png)

1. With **AWS events or EventBridge partner events** selected under **Events**, select the **Use pattern form** option under **Event pattern**. This is where you will define the pattern to match for triggering the rule.

1. Type and select **Amazon Connect** under the **AWS service** dropdown to narrow down the event types. Select the desired event type in the dropdown below. Choose **Next** once the pattern is set up.

   To subscribe to EventBridge event types, create a custom EventBridge rule that matches the following:
   + `"source"` = `"aws.connect"`
   + `"detail-type"` can be one of the following:
     + `"Contact Lens Automated Evaluation Submission Failed"`
     + `"Contact Lens Evaluation Export Failed"`  
![\[The event pattern with Amazon Connect selected as the AWS service.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-event-pattern.png)

1. In the next step, configure the target(s) that will receive the matched events. For simplicity, select the **CloudWatch log group** option under **Select a target**, and choose a log group.

1. Choose **Next** and advance to the final **Review and create** step. Choose **Create rule** once more to complete the rule creation process.

1. Now, if the rule is in the **Enabled** state and a matching event occurs, corresponding logs should show up in the configured CloudWatch log group with the relevant IDs under the metadata section and the failure reason under the data section.  
![\[CloudWatch log group showing matched EventBridge events.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-cloudwatch-log-group.png)  
![\[CloudWatch log detail showing metadata and failure reason.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-cloudwatch-log-detail.png)
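
If you prefer to script this setup instead of using the console, the steps above roughly correspond to the following AWS CLI sketch. The rule name, log group, Region, and account ID are placeholders; also note that a log group used as an EventBridge target may need a resource policy that allows EventBridge to write to it.

```
# Create a rule on the default event bus that matches both failure event types.
aws events put-rule \
  --name connect-evaluation-failures \
  --event-pattern '{
    "source": ["aws.connect"],
    "detail-type": [
      "Contact Lens Automated Evaluation Submission Failed",
      "Contact Lens Evaluation Export Failed"
    ]
  }'

# Send matched events to an existing CloudWatch log group.
aws events put-targets \
  --rule connect-evaluation-failures \
  --targets 'Id=cw-log-target,Arn=arn:aws:logs:us-west-2:111122223333:log-group:/aws/events/connect-evaluation-failures:*'
```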

## Example EventBridge payload
<a name="performance-evaluation-events-payload"></a>

The following is an example EventBridge payload when the rule is matched:

```
{  
  "version": "0",  
  "id": "00005435-d12d-c93b-d9d2-b64cba85fbb6",
  "detail-type": "Contact Lens Automated Evaluation Submission Failed",  
  "source": "aws.connect",  
  "account": "Your AWS account ID",  
  "time": "2025-10-02T10:34:56Z",  
  "region": "us-west-2",
  "resources": [],  
  "detail": {  
    "version": "1.0.0",  
    "metadata": {  
      "contactId": "4266f8e9-8420-4ee7-96cd-515d2edae1f2",
      "instanceId": "d9b0b09d-7dab-47e5-9f82-d6787fbc068c",
      "formId": "8b1365bd-1415-41a9-a491-af226e1bda4e"
    },  
    "data": {  
      "reasonCode": "ANALYSIS_FILE_ERROR",
      "message": "Automated contact evaluation submission failed due to an error when searching/retrieving/parsing the analysis file."
    }  
  }  
}
```

## Common errors
<a name="performance-evaluation-events-errors"></a>

The following errors can occur when the system fails to process an evaluation after multiple retry attempts.

### Automated evaluation submission errors
<a name="automated-evaluation-submission-errors"></a>


| Error | Error message | 
| --- | --- | 
| `AUTOMATED_SUBMISSION_FAILED` | Automated contact evaluation submission failed because some of the questions could not be answered. Please verify the evaluation form and/or the Amazon Connect rule configurations. | 
| `ANALYSIS_FILE_ERROR` | Automated contact evaluation submission failed due to an error when searching/retrieving/parsing the analysis file. | 
| `INTERNAL_SERVER_ERROR` | Automated contact evaluation submission failed due to an internal server error. Please expect delayed processing. | 
| `QUOTA_EXCEEDED_ERROR` | Automated contact evaluation submission failed because the remaining quota for using Gen AI to automatically answer evaluation questions for the contact is insufficient. | 

### Evaluation S3 export errors
<a name="evaluation-s3-export-errors"></a>


| Error | Error message | 
| --- | --- | 
| `S3_BUCKET_ACCESS_DENIED` | Contact evaluation JSON export failed due to insufficient permissions. | 
| `S3_STORAGE_NOT_CONFIGURED` | The export S3 bucket is not configured for your instance. | 
| `INTERNAL_SERVER_ERROR` | Contact evaluation JSON export failed due to an internal server error. Please expect delayed delivery of the export file. | 

# Calibration sessions for performance evaluations
<a name="calibrations-performance-evaluations"></a>

Amazon Connect Contact Lens enables you to conduct calibration sessions that drive consistency and accuracy in how managers evaluate agent performance, so that agents receive consistent feedback. During a calibration, multiple managers evaluate the same contact using the same evaluation form. You can then review differences between the evaluations that different managers completed, align managers on evaluation best practices, and identify opportunities to improve the evaluation form, for example by rephrasing an evaluation question to be more specific so that managers answer it consistently. You can also compare managers' answers with those of a designated expert to measure and improve manager accuracy in evaluating agent performance. The expert is usually the quality manager who is conducting the calibration session.

## Permissions needed for calibrations
<a name="calibrations-performance-evaluations-permissions"></a>

You need the following permissions for calibrations:
+ **Creating calibration sessions:** Add the permission **Evaluation forms - manage calibration sessions** to the security profiles of the set of users that should be permitted to conduct calibration sessions for performance evaluations.
+ **Participating in a calibration session:** Any user who has the permission to perform evaluations, namely **Evaluation forms - perform evaluations**, can participate in a calibration session if they are added as one of the participants.

In addition, for both sets of users, you also need permissions to search and view contacts. For more information, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts).

## Create a calibration session
<a name="calibrations-performance-evaluations-create"></a>

**To create a calibration session**

1. Log in to Amazon Connect with a user account that has the necessary permissions within their security profile.

1. On the navigation pane, choose **Analytics and optimization**, **Contact search**.

1. Search for a contact that you want to use for calibration, for example, by filtering on a minimum interaction duration or a specific queue.

1. On the **Contact details** page of a contact, choose **Evaluations** on the top right to open the **Evaluations** side panel.

1. In the side panel, select the **Calibration session** radio button, choose the desired form for the calibration using the dropdown menu, and then choose the **Setup calibration session** button.  
![\[A diagram of the calibrations session setup.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibrations-setup1.png)

1. Enter a title for the calibration session, select the participants, and optionally designate an expert participant and set a due date.  
![\[A diagram of the calibrations session setup with participants and due date.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-setup2.png)

1. After creation, the calibration session will appear in the side panel. An evaluation will be automatically generated for each participant.  
![\[A diagram of the created calibrations session for each participant.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-setup3.png)

## Edit a calibration session
<a name="calibrations-performance-evaluations-edit"></a>

**To edit a calibration session**

1. In the side panel, locate the calibration session and choose **Edit**.  
![\[A diagram of choosing to edit a calibrations session.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibrations-edit1.png)

1. In the form that opens in the side panel you can modify the calibration session title, add or remove participants, optionally designate an expert participant, and set or adjust the due date.

1. Choose **Save** to update the calibration session. The changes will be reflected in the side panel. New participants will automatically receive an evaluation, while removed participants will have their evaluations deleted. 

## Perform evaluations as a part of a calibration session
<a name="calibrations-perform-evaluations"></a>

Use the following procedure to perform evaluations as a part of a calibration session:

**To perform evaluations**

1. In the side panel, locate the **Calibration evaluations assigned to you** section to view your calibration evaluations.  
![\[A diagram of calibration evaluations assigned to you.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-evaluations1.png)

1. Choose an evaluation to open it. You can respond to these evaluations in the same manner as standard evaluations, with options to save your progress or submit the completed evaluation. Note that automation is disabled on calibration sessions.  
![\[A diagram of responding to calibration evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-evaluations2.png)

1. Calibration managers can access a list of all evaluations associated with a specific calibration session by viewing the calibration session details in the side panel. Calibration managers will also be able to view evaluations submitted by participants.

## Finalize a calibration
<a name="calibrations-finalize"></a>

**To finalize a calibration**

1. Access the calibration session details view and choose **Finalize**.  
![\[A diagram showing the finalize button for calibrations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibrations-finalize.png)

1. Confirm the finalization when prompted. Note that once finalized, neither the session nor its evaluations can be edited.

1. Within a few seconds, a calibration report will be available for download in .csv format. This report contains the answers of participants who submitted evaluations, the weighted scores for each question, each section, and the overall form, evaluator notes, and a comparison of each evaluator's scores with the expert evaluator's scores.

   Use the field **absolute deviation from expert** (lower is better) for each participant to determine whether an evaluator deviates significantly from the expert when answering evaluation questions. You can also use **average absolute deviation from expert** (lower is better) to identify questions that get inconsistent answers from participants and need improvement (for example, better phrasing or more specific wording). A sketch after this procedure shows one way to scan the report for the largest deviations.
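
The exact column layout of the calibration report can vary, so treat the following shell sketch as illustrative only. It assumes the report was downloaded as `calibration-report.csv` and that the deviation values are plain numbers; check the header row first, then sort by the relevant column to surface the largest deviations.

```
# Print the column names with their positions so you can find the
# "absolute deviation from expert" column.
head -1 calibration-report.csv | tr ',' '\n' | nl

# Assuming that column turns out to be column 5 (adjust after checking the header),
# sort the data rows by it in descending order to surface the largest deviations.
tail -n +2 calibration-report.csv | sort -t',' -k5,5 -gr | head
```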

## Finding calibration sessions
<a name="calibrations-find"></a>

Amazon Connect notifies users participating in calibration sessions via email (for example, if a user is added as a participant or if the due date changes). If the user managing a calibration session has added themselves as the **expert** participant, they also receive these emails. The email contains a link to the contact that is being used for calibration. Note that for users to receive email notifications, you need to assign email addresses to the users in Amazon Connect. For more information, see [Add users to Amazon Connect](user-management.md).

As a manager setting up a calibration, you can copy the contact ID to search for the contact on which the calibration session was set up. Note that if you have not added yourself as an expert, or if user email addresses are not set up within Amazon Connect, you will not receive an email containing a link to the contact on which the calibration session was set up.

# Ingest agent activities from third-party applications to evaluate agent performance
<a name="evaluations-external-activities"></a>

You can import agent activities completed in third-party applications into Amazon Connect. These activities are imported as Amazon Connect tasks, which you can evaluate alongside work completed in Amazon Connect. This provides managers with a unified application for quality management.

To import activities completed in third-party applications (such as application processing or social media interactions) as completed tasks, use the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API. When you import these activities, you can capture details relevant for performance evaluation as task attributes. Unlike tasks created in the Amazon Connect admin website, these imported tasks are already marked as completed and don't need to be accepted by the agent who completed the activity in the external application.

Managers can then evaluate these external activities alongside native Amazon Connect interactions and back-office tasks. This gives managers a unified view of agent performance in the [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md). 

## How to ingest activities from third-party applications
<a name="steps-for-it-admins"></a>

The following steps are typically performed by an IT admin.
+  Ensure that agents or back-office workers who you want to evaluate are users on Amazon Connect. To add new users, see [Add users to Amazon Connect](user-management.md). 
+ Use the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API to ingest all external activities completed by these agents into Amazon Connect as completed Amazon Connect tasks. 

   You can ingest:
  + All activities completed in third-party applications (for example, triggered by the completion of these activities). This provides you with a comprehensive view of agent activities in a single application. 
  + A percentage of agents' external activities as a sample that you use for performance evaluation.

  Following is a sample API request for ingesting a claims authorization activity that was completed in another system.

  ```
  awscurl \
  --service connect \
  -X PUT \
  'https://connect.us-east-1.amazonaws.com/Prod/contact/create-contact' \
  --region us-east-1 \
  -d \
  '{
    "Channel":"TASK",
    "InstanceId":"8f3b9ab3-df68-4124-8573-2626b5c939ac", 
    "InitiationMethod":"API",
    "InitiateAs":"COMPLETED",
    "UserInfo": {"UserId": "arn:aws:connect:us-west-2:295154396770:instance/8f3b9ab3-df68-4124-8573-2626b5c939ac/agent/1c99b776-8e56-4aaa-a1bf-b950ffbe61e4"},
    "Name": "Processing Authorization #12345",
    "Description": "Customer Name: John Doe; Customer Condition: Asthma; Medication: Levocetrizin",
    "Attributes": {
      "Authorization": "12345",
      "ExternalContactType": "Authorization" 
    },
    "References": {
      "ThirdPartySystemURL": {
        "Type": "URL",
        "Value": "https://example.com/customer/12345"
      }
    }
  }'
  ```
+  You can add additional activity information within attributes. This information may be useful for quality managers who are searching for and evaluating contacts. For example, the previous API call includes a custom attribute called `ExternalContactType`, which enables managers to distinguish between different types of external activities within Contact search. 

   You can also add links to the third-party system within contact references. These links enable managers to reference additional information that's not included with the task. 
+  To enable managers to search for activities using these attributes, you need to enable search on these attributes. For more information, see [Search for contacts in Amazon Connect by using custom contact attributes or contact segment attributes](search-custom-attributes.md). 
**Note**  
Only tasks that are created after this setting is configured are searchable using these attributes.
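
As an alternative to the awscurl request shown earlier in this list, recent AWS CLI versions expose the same API as `aws connect create-contact`. The following sketch is equivalent under that assumption; the Region, instance ID, agent ARN, account ID, and reference URL are placeholders, and the `--initiate-as` parameter requires a CLI version that includes it.

```
aws connect create-contact \
  --region us-east-1 \
  --instance-id 8f3b9ab3-df68-4124-8573-2626b5c939ac \
  --channel TASK \
  --initiation-method API \
  --initiate-as COMPLETED \
  --user-info UserId=arn:aws:connect:us-east-1:111122223333:instance/8f3b9ab3-df68-4124-8573-2626b5c939ac/agent/1c99b776-8e56-4aaa-a1bf-b950ffbe61e4 \
  --name "Processing Authorization #12345" \
  --description "Customer Name: John Doe; Customer Condition: Asthma; Medication: Levocetrizin" \
  --attributes Authorization=12345,ExternalContactType=Authorization \
  --references 'ThirdPartySystemURL={Type=URL,Value=https://example.com/customer/12345}'
```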

## How to evaluate external activities
<a name="steps-for-managers"></a>

The following steps are typically performed by managers.

 Managers can evaluate ingested activities in Amazon Connect the same way that they evaluate native Amazon Connect contacts. For more information, see [Evaluate performance](evaluations.md).

 If your admin has configured search on custom contact attributes, you can search for external activities using identifiers such as the activity type and ID. 

The following image shows a search for `Completed` contacts, with `Attribute` = `ExternalContactType`.

![\[A contact search for completed contacts with Attribute = ExternalContactType.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluate-external-activities1.png)


The following image shows an example of what contact details look like for a completed external contact. In this image: 
+ Channel subtype = connect:ExternalTask
+ Initiation method = API
+ References includes the URL to the third-party system

![\[Contact details for an external contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluate-external-activities2.png)
