LlmAsAJudgeEvaluatorConfig
The configuration for LLM-as-a-Judge evaluation, which uses a language model to assess agent performance based on custom instructions and a rating scale.
Contents
- instructions
The evaluation instructions that guide the language model in assessing agent performance, including criteria and evaluation guidelines.
Type: String
Required: Yes
- modelConfig
The model configuration that specifies which foundation model to use and how to configure it for evaluation.
Type: EvaluatorModelConfig object
Note: This object is a Union. Only one member of this object can be specified or returned.
Required: Yes
- ratingScale
The rating scale that defines how the evaluator should score agent performance, either numerical or categorical.
Type: RatingScale object
Note: This object is a Union. Only one member of this object can be specified or returned.
Required: Yes
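To make the shape of this object concrete, the following is a minimal sketch of an LlmAsAJudgeEvaluatorConfig payload expressed as a Python dict, as it might appear in an AWS SDK request. The top-level keys follow the properties documented above; the member names shown inside the modelConfig and ratingScale unions are hypothetical placeholders, not confirmed API fields.

# Hypothetical sketch of an LlmAsAJudgeEvaluatorConfig payload.
# Top-level keys match the properties documented above. The member
# names inside the modelConfig and ratingScale unions are placeholders
# for illustration only; see the EvaluatorModelConfig and RatingScale
# references for the actual union members.
llm_as_a_judge_config = {
    "instructions": (
        "Rate how well the agent answered the user's question. "
        "Consider factual accuracy, completeness, and tone."
    ),
    "modelConfig": {
        # Union: specify exactly one member (placeholder shown).
        "bedrockModel": {"modelId": "example-model-id"},
    },
    "ratingScale": {
        # Union: specify exactly one member (placeholder shown).
        "numerical": [
            {"value": 1, "definition": "Poor response"},
            {"value": 5, "definition": "Excellent response"},
        ],
    },
}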
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: