/AWS1/CL_BDKEVALDSMETRICCONFIG¶
Defines the prompt datasets, built-in metric names and custom metric names, and the task type.
CONSTRUCTOR
¶
IMPORTING¶
Required arguments:¶
iv_tasktype
TYPE /AWS1/BDKEVALUATIONTASKTYPE
/AWS1/BDKEVALUATIONTASKTYPE
¶
The type of task you want to evaluate for your evaluation job. This applies only to model evaluation jobs and is ignored for knowledge base evaluation jobs.
io_dataset
TYPE REF TO /AWS1/CL_BDKEVALUATIONDATASET
/AWS1/CL_BDKEVALUATIONDATASET
¶
Specifies the prompt dataset.
it_metricnames
TYPE /AWS1/CL_BDKEVALMETRICNAMES_W=>TT_EVALUATIONMETRICNAMES
TT_EVALUATIONMETRICNAMES
¶
The names of the metrics you want to use for your evaluation job.
For knowledge base evaluation jobs that evaluate retrieval only, valid values are "Builtin.ContextRelevance" and "Builtin.ContextCoverage".
For knowledge base evaluation jobs that evaluate retrieval with response generation, valid values are "Builtin.Correctness", "Builtin.Completeness", "Builtin.Helpfulness", "Builtin.LogicalCoherence", "Builtin.Faithfulness", "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For automated model evaluation jobs, valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In model evaluation jobs that use an LLM as a judge, you can specify "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness", "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance", "Builtin.FollowingInstructions", and "Builtin.ProfessionalStyleAndTone". You can also specify the following responsible AI related metrics, but only for model evaluation jobs that use an LLM as a judge: "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For human-based model evaluation jobs, the list of strings must match the name parameter specified in HumanEvaluationCustomMetric.
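A minimal construction sketch, assuming an automated general text generation job: the dataset name, task type, and metric choices below are illustrative, and the constructor parameters of /AWS1/CL_BDKEVALUATIONDATASET and the string wrapper class follow the SDK's usual iv_/io_ naming conventions rather than being confirmed here.

```abap
" Illustrative values only - see the metric/task-type combinations described above.
DATA(lo_dataset) = NEW /aws1/cl_bdkevaluationdataset(
  iv_name = 'Builtin.Bold' ).                        " hypothetical built-in prompt dataset

DATA lt_metricnames TYPE /aws1/cl_bdkevalmetricnames_w=>tt_evaluationmetricnames.
APPEND NEW /aws1/cl_bdkevalmetricnames_w( iv_value = 'Builtin.Accuracy' )   TO lt_metricnames.
APPEND NEW /aws1/cl_bdkevalmetricnames_w( iv_value = 'Builtin.Robustness' ) TO lt_metricnames.

DATA(lo_metric_config) = NEW /aws1/cl_bdkevaldsmetricconfig(
  iv_tasktype    = 'Generation'                      " one /AWS1/BDKEVALUATIONTASKTYPE value
  io_dataset     = lo_dataset
  it_metricnames = lt_metricnames ).
```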
Queryable Attributes¶
taskType¶
The type of task you want to evaluate for your evaluation job. This applies only to model evaluation jobs and is ignored for knowledge base evaluation jobs.
Accessible with the following methods¶
Method | Description |
---|---|
GET_TASKTYPE() | Getter for TASKTYPE, with configurable default |
ASK_TASKTYPE() | Getter for TASKTYPE w/ exceptions if field has no value |
HAS_TASKTYPE() | Determine if TASKTYPE has a value |
dataset¶
Specifies the prompt dataset.
Accessible with the following methods¶
Method | Description |
---|---|
GET_DATASET() | Getter for DATASET |
metricNames¶
The names of the metrics you want to use for your evaluation job.
For knowledge base evaluation jobs that evaluate retrieval only, valid values are "Builtin.ContextRelevance" and "Builtin.ContextCoverage".
For knowledge base evaluation jobs that evaluate retrieval with response generation, valid values are "Builtin.Correctness", "Builtin.Completeness", "Builtin.Helpfulness", "Builtin.LogicalCoherence", "Builtin.Faithfulness", "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For automated model evaluation jobs, valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In model evaluation jobs that use an LLM as a judge, you can specify "Builtin.Correctness", "Builtin.Completeness", "Builtin.Faithfulness", "Builtin.Helpfulness", "Builtin.Coherence", "Builtin.Relevance", "Builtin.FollowingInstructions", and "Builtin.ProfessionalStyleAndTone". You can also specify the following responsible AI related metrics, but only for model evaluation jobs that use an LLM as a judge: "Builtin.Harmfulness", "Builtin.Stereotyping", and "Builtin.Refusal".
For human-based model evaluation jobs, the list of strings must match the name parameter specified in HumanEvaluationCustomMetric.
Accessible with the following methods¶
Method | Description |
---|---|
GET_METRICNAMES() | Getter for METRICNAMES, with configurable default |
ASK_METRICNAMES() | Getter for METRICNAMES w/ exceptions if field has no value |
HAS_METRICNAMES() | Determine if METRICNAMES has a value |
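A short sketch of reading the attributes back through the getters listed above; lo_metric_config is the instance from the constructor sketch, and GET_VALUE( ) on the wrapper entries is assumed from the SDK's usual string wrapper pattern.

```abap
" lo_metric_config is the instance built in the constructor sketch above.
DATA(lv_tasktype)   = lo_metric_config->get_tasktype( ).
DATA(lo_dataset_ref) = lo_metric_config->get_dataset( ).

IF lo_metric_config->has_metricnames( ) = abap_true.
  DATA(lt_names) = lo_metric_config->get_metricnames( ).
  LOOP AT lt_names INTO DATA(lo_name_wrapper).
    " GET_VALUE( ) on each wrapper entry is assumed to return the metric name string.
    DATA(lv_metric_name) = lo_name_wrapper->get_value( ).
  ENDLOOP.
ENDIF.
```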
Public Local Types In This Class¶
Internal table types, representing arrays and maps of this class, are defined as local types:
TT_EVALDATASETMETRICCONFIGS
¶
TYPES TT_EVALDATASETMETRICCONFIGS TYPE STANDARD TABLE OF REF TO /AWS1/CL_BDKEVALDSMETRICCONFIG WITH DEFAULT KEY.
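A brief sketch of how this local table type might be used to collect several dataset metric configurations, for example before handing them to an evaluation job configuration object; lo_metric_config is carried over from the constructor sketch above.

```abap
" Collect one or more configurations into the table type declared above.
DATA lt_configs TYPE /aws1/cl_bdkevaldsmetricconfig=>tt_evaldatasetmetricconfigs.
APPEND lo_metric_config TO lt_configs.        " instance from the constructor sketch
```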