LlmAsAJudgeEvaluatorConfig

The configuration for LLM-as-a-Judge evaluation, in which a language model assesses agent performance according to custom evaluation instructions and a rating scale.
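
A minimal construction sketch, assuming a conventional Kotlin builder; the property names (`instruction`, `modelConfig`, `ratingScale`) and the placeholder values are assumptions, not identifiers confirmed by this page:

```kotlin
// Hypothetical usage; property and placeholder names below are assumptions.
val config = LlmAsAJudgeEvaluatorConfig.Builder().apply {
    // Criteria the judge model applies when scoring the agent.
    instruction = """
        Rate how well the agent resolved the user's request.
        Penalize hallucinated facts and unhelpful refusals.
    """.trimIndent()
    modelConfig = judgeModelConfig // hypothetical: selects and configures the foundation model
    ratingScale = fivePointScale   // hypothetical: a numerical or categorical scale
}.build()
```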

Types

class Builder
object Companion

Properties

The evaluation instructions that guide the language model in assessing agent performance, including criteria and evaluation guidelines.

The model configuration that specifies which foundation model to use and how to configure it for evaluation.

The rating scale that defines how the evaluator should score agent performance, either numerical or categorical; both shapes are illustrated in the sketch below.
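
To make the two shapes concrete, here is a hedged sketch; `RatingScale` and its subtypes are hypothetical stand-ins, not names confirmed by this page:

```kotlin
// Hypothetical modelling of the two scale shapes described above.
sealed interface RatingScale
data class Numerical(val min: Int, val max: Int) : RatingScale   // e.g. a 1-5 score
data class Categorical(val labels: List<String>) : RatingScale   // e.g. poor..excellent

val fivePointScale = Numerical(min = 1, max = 5)
val qualityLabels = Categorical(listOf("poor", "fair", "good", "excellent"))
```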

Functions

open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String