/AWS1/CL_BDCEVALRESULTCONTENT

The comprehensive result of an evaluation containing the score, explanation, evaluator metadata, and execution details. Provides both quantitative ratings and qualitative insights about agent performance.

CONSTRUCTOR

IMPORTING

Required arguments:

iv_evaluatorarn TYPE /AWS1/BDCEVALUATORARN

The Amazon Resource Name (ARN) of the evaluator used to generate this result. For custom evaluators, this is the full ARN; for built-in evaluators, this follows the pattern Builtin.{EvaluatorName}.

iv_evaluatorid TYPE /AWS1/BDCEVALUATORID

The unique identifier of the evaluator that produced this result. This matches the evaluatorId provided in the evaluation request and can be used to identify which evaluator generated specific results.

iv_evaluatorname TYPE /AWS1/BDCEVALUATORNAME

The human-readable name of the evaluator used for this evaluation. For built-in evaluators, this is the descriptive name (e.g., "Helpfulness", "Correctness"); for custom evaluators, this is the user-defined name.

io_context TYPE REF TO /AWS1/CL_BDCCONTEXT

The contextual information associated with this evaluation result, including span context details that identify the specific traces and sessions that were evaluated.

Optional arguments:

iv_explanation TYPE /AWS1/BDCEVALUATIONEXPLANATION

The detailed explanation provided by the evaluator describing the reasoning behind the assigned score. This qualitative feedback helps you understand why specific ratings were given and provides actionable insights for improvement.

iv_value TYPE /AWS1/RT_DOUBLE_AS_STRING

The numerical score assigned by the evaluator according to its configured rating scale. For numerical scales, this is a decimal value within the defined range. This field is not allowed for categorical scales.

iv_label TYPE /AWS1/BDCSTRING

The categorical label assigned by the evaluator when using a categorical rating scale. This provides a human-readable description of the evaluation result (e.g., "Excellent", "Good", "Poor") corresponding to the numerical value. For numerical scales, this field is optional and provides a natural language explanation of what the value means (e.g., value 0.5 = "Somewhat Helpful").

io_tokenusage TYPE REF TO /AWS1/CL_BDCTOKENUSAGE

The token consumption statistics for this evaluation, including input tokens, output tokens, and total tokens used by the underlying language model during the evaluation process.

iv_errormessage TYPE /AWS1/BDCEVALERRORMESSAGE

The error message describing what went wrong if the evaluation failed. Provides detailed information about evaluation failures to help diagnose and resolve issues with evaluator configuration or input data.

iv_errorcode TYPE /AWS1/BDCEVALUATIONERRORCODE

The error code indicating the type of failure that occurred during evaluation. Used to programmatically identify and handle different categories of evaluation errors.
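Taken together, the arguments above can be passed to the constructor as shown in the following sketch. All literal values are hypothetical placeholders; in practice, instances of this class are normally constructed by the SDK when it deserializes a service response, not built by hand.

```abap
" Hedged sketch: placeholder values, and lo_context is assumed to be a
" previously obtained /AWS1/CL_BDCCONTEXT reference.
DATA(lo_result) = NEW /aws1/cl_bdcevalresultcontent(
  iv_evaluatorarn  = 'Builtin.Helpfulness'
  iv_evaluatorid   = 'helpfulness-check'
  iv_evaluatorname = 'Helpfulness'
  io_context       = lo_context
  iv_value         = '0.5'
  iv_label         = 'Somewhat Helpful'
  iv_explanation   = 'The response addressed part of the question.' ).
```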


Queryable Attributes

evaluatorArn

The Amazon Resource Name (ARN) of the evaluator used to generate this result. For custom evaluators, this is the full ARN; for built-in evaluators, this follows the pattern Builtin.{EvaluatorName}.

Accessible with the following methods

Method Description
GET_EVALUATORARN() Getter for EVALUATORARN, with configurable default
ASK_EVALUATORARN() Getter for EVALUATORARN w/ exceptions if field has no value
HAS_EVALUATORARN() Determine if EVALUATORARN has a value

evaluatorId

The unique identifier of the evaluator that produced this result. This matches the evaluatorId provided in the evaluation request and can be used to identify which evaluator generated specific results.

Accessible with the following methods

Method Description
GET_EVALUATORID() Getter for EVALUATORID, with configurable default
ASK_EVALUATORID() Getter for EVALUATORID w/ exceptions if field has no value
HAS_EVALUATORID() Determine if EVALUATORID has a value

evaluatorName

The human-readable name of the evaluator used for this evaluation. For built-in evaluators, this is the descriptive name (e.g., "Helpfulness", "Correctness"); for custom evaluators, this is the user-defined name.

Accessible with the following methods

Method Description
GET_EVALUATORNAME() Getter for EVALUATORNAME, with configurable default
ASK_EVALUATORNAME() Getter for EVALUATORNAME w/ exceptions if field has no value
HAS_EVALUATORNAME() Determine if EVALUATORNAME has a value

explanation

The detailed explanation provided by the evaluator describing the reasoning behind the assigned score. This qualitative feedback helps you understand why specific ratings were given and provides actionable insights for improvement.

Accessible with the following methods

Method Description
GET_EXPLANATION() Getter for EXPLANATION, with configurable default
ASK_EXPLANATION() Getter for EXPLANATION w/ exceptions if field has no value
HAS_EXPLANATION() Determine if EXPLANATION has a value

context

The contextual information associated with this evaluation result, including span context details that identify the specific traces and sessions that were evaluated.

Accessible with the following methods

Method Description
GET_CONTEXT() Getter for CONTEXT

value

The numerical score assigned by the evaluator according to its configured rating scale. For numerical scales, this is a decimal value within the defined range. This field is not allowed for categorical scales.

Accessible with the following methods

Method Description
GET_VALUE() Getter for VALUE, with configurable default
ASK_VALUE() Getter for VALUE w/ exceptions if field has no value
STR_VALUE() String format for VALUE, with configurable default
HAS_VALUE() Determine if VALUE has a value
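The four accessors for value illustrate the GET/ASK/STR/HAS pattern used throughout this class. A sketch, assuming lo_result holds an instance and that ASK_* getters raise /AWS1/CX_RT_VALUE_MISSING when the field is unset (the usual convention in this SDK):

```abap
" Guard with HAS_..., then read; GET_... accepts a configurable default.
IF lo_result->has_value( ) = abap_true.
  DATA(lv_score) = lo_result->get_value( ).
ENDIF.

" STR_VALUE returns the double formatted as a string, e.g. for logging.
DATA(lv_score_str) = lo_result->str_value( ).

" ASK_... raises an exception when the field has no value.
TRY.
    lv_score = lo_result->ask_value( ).
  CATCH /aws1/cx_rt_value_missing.
    " No numerical score; fall back to the categorical label, if any.
    IF lo_result->has_label( ) = abap_true.
      DATA(lv_label) = lo_result->get_label( ).
    ENDIF.
ENDTRY.
```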

label

The categorical label assigned by the evaluator when using a categorical rating scale. This provides a human-readable description of the evaluation result (e.g., "Excellent", "Good", "Poor") corresponding to the numerical value. For numerical scales, this field is optional and provides a natural language explanation of what the value means (e.g., value 0.5 = "Somewhat Helpful").

Accessible with the following methods

Method Description
GET_LABEL() Getter for LABEL, with configurable default
ASK_LABEL() Getter for LABEL w/ exceptions if field has no value
HAS_LABEL() Determine if LABEL has a value

tokenUsage

The token consumption statistics for this evaluation, including input tokens, output tokens, and total tokens used by the underlying language model during the evaluation process.

Accessible with the following methods

Method Description
GET_TOKENUSAGE() Getter for TOKENUSAGE

errorMessage

The error message describing what went wrong if the evaluation failed. Provides detailed information about evaluation failures to help diagnose and resolve issues with evaluator configuration or input data.

Accessible with the following methods

Method Description
GET_ERRORMESSAGE() Getter for ERRORMESSAGE, with configurable default
ASK_ERRORMESSAGE() Getter for ERRORMESSAGE w/ exceptions if field has no value
HAS_ERRORMESSAGE() Determine if ERRORMESSAGE has a value

errorCode

The error code indicating the type of failure that occurred during evaluation. Used to programmatically identify and handle different categories of evaluation errors.

Accessible with the following methods

Method Description
GET_ERRORCODE() Getter for ERRORCODE, with configurable default
ASK_ERRORCODE() Getter for ERRORCODE w/ exceptions if field has no value
HAS_ERRORCODE() Determine if ERRORCODE has a value
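When an evaluation fails, errorCode and errorMessage are populated instead of a score. A hedged sketch of branching on the two outcomes, again assuming lo_result holds an instance:

```abap
IF lo_result->has_errorcode( ) = abap_true.
  " Failed evaluation: branch on the machine-readable code and
  " log the human-readable message.
  DATA(lv_code) = lo_result->get_errorcode( ).
  DATA(lv_msg)  = lo_result->get_errormessage( ).
ELSE.
  " Successful evaluation: read the score and/or label here.
  DATA(lv_explanation) = lo_result->get_explanation( ).
ENDIF.
```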

Public Local Types In This Class

Internal table types, representing arrays and maps of this class, are defined as local types:

TT_EVALUATIONRESULTS

TYPES TT_EVALUATIONRESULTS TYPE STANDARD TABLE OF REF TO /AWS1/CL_BDCEVALRESULTCONTENT WITH DEFAULT KEY.
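TT_EVALUATIONRESULTS holds a collection of result references, e.g. all results returned for an evaluation run. A sketch of declaring and iterating over such a table:

```abap
" The table type is a local type of the class, addressed with => .
DATA lt_results TYPE /aws1/cl_bdcevalresultcontent=>tt_evaluationresults.

LOOP AT lt_results INTO DATA(lo_item).
  " Each row is a REF TO the result class; dereference with -> .
  DATA(lv_name) = lo_item->get_evaluatorname( ).
ENDLOOP.
```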