LlmAsAJudgeEvaluatorConfig
Configuration for LLM-as-a-Judge evaluation, which uses a language model to assess agent performance according to custom instructions and a rating scale.
Properties
The evaluation instructions that guide the language model in assessing agent performance, including criteria and evaluation guidelines.
The model configuration that specifies which foundation model to use and how to configure it for evaluation.
The rating scale that defines how the evaluator should score agent performance, either numerical or categorical.
Functions
inline fun copy(block: LlmAsAJudgeEvaluatorConfig.Builder.() -> Unit = {}): LlmAsAJudgeEvaluatorConfig
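To illustrate how `copy` interacts with the builder, the sketch below is a minimal, hypothetical re-implementation of the builder/copy pattern used by config classes like this one. The class name `EvaluatorConfig` and the property names `instructions` and `ratingScale` are illustrative stand-ins, not the real SDK API: `copy` re-populates a fresh builder from the current instance, applies the caller's block, and rebuilds, so only the overridden properties change.

```kotlin
// Hypothetical, simplified sketch of the builder + copy pattern; names are
// illustrative and do not come from the real SDK.
class EvaluatorConfig private constructor(builder: Builder) {
    val instructions: String? = builder.instructions
    val ratingScale: String? = builder.ratingScale

    class Builder {
        var instructions: String? = null
        var ratingScale: String? = null
        internal fun build(): EvaluatorConfig = EvaluatorConfig(this)
    }

    companion object {
        // DSL-style factory: EvaluatorConfig { instructions = "..." }
        operator fun invoke(block: Builder.() -> Unit): EvaluatorConfig =
            Builder().apply(block).build()
    }

    // Copy seeds a new builder with the current values, then lets the
    // block override any subset of them before rebuilding.
    fun copy(block: Builder.() -> Unit = {}): EvaluatorConfig =
        Builder().apply {
            instructions = this@EvaluatorConfig.instructions
            ratingScale = this@EvaluatorConfig.ratingScale
        }.apply(block).build()
}

fun main() {
    val base = EvaluatorConfig { instructions = "Rate helpfulness from 1 to 5." }
    // Only ratingScale changes; instructions carries over from base.
    val scaled = base.copy { ratingScale = "numerical-1-to-5" }
    println(scaled.instructions)
    println(scaled.ratingScale)
}
```

This pattern keeps the config class immutable while still allowing cheap, targeted modifications, which is why the generated `copy` takes a builder block rather than exposing mutable setters.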