Available benchmark tasks - Amazon Nova

This text is a machine translation of the English original. In the event of any ambiguity or inconsistency, the English version prevails.

Available benchmark tasks

A sample code package is provided that demonstrates how to compute benchmark metrics using the SageMaker AI model evaluation capability for Amazon Nova. To access the code package, see sample-Nova-lighteval-custom-task.

The following is the list of supported industry-standard benchmarks. You can specify any of these benchmarks in the eval_task parameter.

| Benchmark | Modality | Description | Metric | Strategy | Subtasks available |
| --- | --- | --- | --- | --- | --- |
| mmlu | Text | Multitask Language Understanding – tests knowledge across 57 subjects. | accuracy | zs_cot | Yes |
| mmlu_pro | Text | MMLU – Professional subset – focuses on professional domains such as law, medicine, accounting, and engineering. | accuracy | zs_cot | No |
| bbh | Text | Advanced reasoning tasks – a suite of challenging problems that test higher-order cognitive and problem-solving skills. | accuracy | zs_cot | Yes |
| gpqa | Text | General Physics Question Answering – evaluates comprehension of physics concepts and the ability to solve related problems. | accuracy | zs_cot | No |
| math | Text | Mathematical problem solving – measures mathematical reasoning across topics such as algebra, calculus, and word problems. | exact_match | zs_cot | Yes |
| strong_reject | Text | Quality-control task – tests the model's ability to detect and reject inappropriate, harmful, or incorrect content. | deflection | zs | No |
| IFEval | Text | Instruction-Following Evaluation – measures how accurately a model follows given instructions and completes tasks to specification. | accuracy | zs | No |
| gen_qa | Text | Custom dataset evaluation – lets you bring your own dataset for benchmarking and compare model output to reference answers using metrics such as ROUGE and BLEU. | all | gen_qa | No |
| llm_judge | Text | LLM-as-a-Judge preference comparison – uses a Nova Judge model to determine the preference between paired responses (B compared to A) for your prompts, and computes the probability that B is preferred over A. | all | judge | No |
| humaneval | Text | HumanEval – a benchmark dataset designed to evaluate the code-generation capabilities of large language models. | pass@1 | zs | No |
| mm_llm_judge | Multimodal (image) | Behaves the same as the text-based llm_judge above; the only difference is that it supports inference over images. | all | judge | No |
| rubric_llm_judge | Text | Rubric Judge is an enhanced LLM-as-a-judge evaluation model built on Nova 2.0 Lite. Unlike the original judge model, which provides only a preference verdict, Rubric Judge dynamically generates custom evaluation criteria tailored to each prompt and assigns fine-grained scores across multiple dimensions. | all | judge | No |
| aime_2024 | Text | AIME 2024 – American Invitational Mathematics Examination problems testing advanced mathematical reasoning and problem solving. | exact_match | zs_cot | No |
| calendar_scheduling | Text | Natural Plan – calendar scheduling tasks test the ability to plan meetings across multiple days and attendees. | exact_match | fs | No |
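As an illustration of how benchmark selection might be wired up, the sketch below validates a requested eval_task against the table above before building a configuration. The `build_eval_config` helper and its field names are hypothetical; consult the sample-Nova-lighteval-custom-task package for the actual recipe schema.

```python
# Sketch of selecting and validating a benchmark before launching an
# evaluation job. SUPPORTED_TASKS mirrors the table above; the
# build_eval_config helper and its config keys are hypothetical.
SUPPORTED_TASKS = {
    "mmlu", "mmlu_pro", "bbh", "gpqa", "math", "strong_reject",
    "IFEval", "gen_qa", "llm_judge", "humaneval", "mm_llm_judge",
    "rubric_llm_judge", "aime_2024", "calendar_scheduling",
}

def build_eval_config(eval_task: str) -> dict:
    """Return a minimal evaluation config for a supported benchmark."""
    if eval_task not in SUPPORTED_TASKS:
        raise ValueError(f"Unsupported eval_task: {eval_task!r}")
    return {"evaluation": {"task": eval_task}}

config = build_eval_config("mmlu")
print(config["evaluation"]["task"])
```

Failing fast on an unknown task name is cheaper than discovering the typo after an evaluation job has been submitted.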

The following are the available mmlu subtasks:

MMLU_SUBTASKS = [
    "abstract_algebra", "anatomy", "astronomy", "business_ethics",
    "clinical_knowledge", "college_biology", "college_chemistry",
    "college_computer_science", "college_mathematics", "college_medicine",
    "college_physics", "computer_security", "conceptual_physics",
    "econometrics", "electrical_engineering", "elementary_mathematics",
    "formal_logic", "global_facts", "high_school_biology",
    "high_school_chemistry", "high_school_computer_science",
    "high_school_european_history", "high_school_geography",
    "high_school_government_and_politics", "high_school_macroeconomics",
    "high_school_mathematics", "high_school_microeconomics",
    "high_school_physics", "high_school_psychology", "high_school_statistics",
    "high_school_us_history", "high_school_world_history", "human_aging",
    "human_sexuality", "international_law", "jurisprudence",
    "logical_fallacies", "machine_learning", "management", "marketing",
    "medical_genetics", "miscellaneous", "moral_disputes", "moral_scenarios",
    "nutrition", "philosophy", "prehistory", "professional_accounting",
    "professional_law", "professional_medicine", "professional_psychology",
    "public_relations", "security_studies", "sociology", "us_foreign_policy",
    "virology", "world_religions"
]

The following are the available bbh subtasks:

BBH_SUBTASKS = [
    "boolean_expressions", "causal_judgement", "date_understanding",
    "disambiguation_qa", "dyck_languages", "formal_fallacies",
    "geometric_shapes", "hyperbaton", "logical_deduction_five_objects",
    "logical_deduction_seven_objects", "logical_deduction_three_objects",
    "movie_recommendation", "multistep_arithmetic_two", "navigate",
    "object_counting", "penguins_in_a_table",
    "reasoning_about_colored_objects", "ruin_names",
    "salient_translation_error_detection", "snarks", "sports_understanding",
    "temporal_sequences", "tracking_shuffled_objects_five_objects",
    "tracking_shuffled_objects_seven_objects",
    "tracking_shuffled_objects_three_objects", "web_of_lies", "word_sorting"
]

The following are the available math subtasks:

MATH_SUBTASKS = [
    "algebra", "counting_and_probability", "geometry",
    "intermediate_algebra", "number_theory", "prealgebra", "precalculus"
]
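When a benchmark exposes subtasks, a quick membership check against these lists can catch typos before a job is submitted. A minimal sketch, assuming only the lists above (the `validate_subtask` helper is illustrative, not part of the SageMaker AI evaluation API, and the lists are abbreviated here for brevity):

```python
# Map each benchmark that exposes subtasks to its subtask list. The lists
# are abbreviated; the full MMLU_SUBTASKS (57 entries), BBH_SUBTASKS (27)
# and MATH_SUBTASKS (7) are given above. validate_subtask is a
# hypothetical convenience helper, not an official API.
SUBTASKS = {
    "mmlu": ["abstract_algebra", "anatomy", "astronomy"],        # + 54 more
    "bbh": ["boolean_expressions", "causal_judgement"],          # + 25 more
    "math": ["algebra", "counting_and_probability", "geometry"], # + 4 more
}

def validate_subtask(eval_task: str, subtask: str) -> str:
    """Raise if eval_task has no subtasks, or subtask is not one of them."""
    if eval_task not in SUBTASKS:
        raise ValueError(f"{eval_task!r} does not expose subtasks")
    if subtask not in SUBTASKS[eval_task]:
        raise ValueError(f"{subtask!r} is not a {eval_task} subtask")
    return subtask

print(validate_subtask("math", "algebra"))
```

The same check rejects both a misspelled subtask name and a subtask requested for a benchmark (such as gpqa) that does not support them.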