

This content is a machine translation of the English original. In the event of any ambiguity or inconsistency, the English version prevails.

# Prompt engineering concepts
<a name="prompt-engineering-guidelines"></a>

 *Prompt engineering* refers to the practice of optimizing textual input to a large language model (LLM) to obtain the desired response. Prompting helps LLMs perform a wide variety of tasks, including classification, question answering, code generation, creative writing, and more. The quality of the prompt that you provide to an LLM can affect the quality of the model's response. This section gives you the information you need to get started with prompt engineering. It also covers tools that can help you find the best prompt format for your use case when working with LLMs on Amazon Bedrock.

**Note**  
All examples in this document were obtained through API calls. Because of the stochastic nature of the LLM generation process, responses may vary. Unless otherwise noted, the prompts were written by employees of AWS.

**Note**  
To reduce hallucinations, you can refine your prompts with prompt-optimization techniques, give the model access to more relevant data with techniques such as Retrieval Augmented Generation (RAG), or use a different model that may produce better results.

Amazon Bedrock includes models from a variety of providers. The following is a list of prompt engineering guidelines for those models.
+ **Amazon Nova Micro, Lite, and Pro prompting guide:** [Prompting best practices for Amazon Nova understanding models](https://docs.aws.amazon.com/nova/latest/userguide/prompting.html)
+ **Amazon Nova Canvas prompting guide:** [Generating images with Amazon Nova](https://docs.aws.amazon.com/nova/latest/userguide/image-generation.html)
+ **Amazon Nova Reel prompting guide:** [Generating videos with Amazon Nova](https://docs.aws.amazon.com/nova/latest/userguide/video-generation.html)
+ **Anthropic Claude model prompting guide:** [https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)
+ **Cohere prompting guide:** [https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering](https://txt.cohere.com/how-to-train-your-pet-llm-prompt-engineering)
+ **AI21 Labs Jurassic model prompting guide:** [https://docs.ai21.com/docs/prompt-engineering](https://docs.ai21.com/docs/prompt-engineering)
+ **Meta Llama 2 prompting guide:** [https://ai.meta.com/llama/get-started/#prompting](https://ai.meta.com/llama/get-started/#prompting)
+ **Stability AI prompting guide:** [https://platform.stability.ai/docs/getting-started](https://platform.stability.ai/docs/getting-started)
+ **Mistral AI prompting guide:** [https://docs.mistral.ai/guides/prompting_capabilities/](https://docs.mistral.ai/guides/prompting_capabilities/)

**Disclaimer:** The examples in this document use the text models currently available in Amazon Bedrock. Also, this document covers general prompting guidelines; for model-specific guides, see the respective documentation on Amazon Bedrock. This document provides a starting point. While the following example responses were generated using specific models on Amazon Bedrock, you can use other models in Amazon Bedrock to get results as well. Results may differ between models because each model has its own performance characteristics. The output that you generate using AI services is your content. Due to the nature of machine learning, output may not be unique across customers, and the services may generate the same or similar results across customers.

## What is a prompt?
<a name="what-is-a-prompt"></a>

A prompt is a specific set of inputs provided by you, the user, that guides an LLM on Amazon Bedrock to generate an appropriate response or output for a given task or instruction.

```
User Prompt:
Who invented the airplane?
```

When queried with this prompt, Titan provides an output:

```
Output:
The Wright brothers, Orville and Wilbur Wright are widely credited 
with inventing and manufacturing the world's first successful airplane.
```

(Source of prompt: AWS; model used: Amazon Titan Text)

### Components of a prompt
<a name="components-of-a-prompt"></a>

A single prompt can include several components, such as the task or instruction you want the LLM to perform, the context of the task (for example, a description of the relevant domain), demonstration examples, and the input text that you want the LLM on Amazon Bedrock to use in its response. Depending on your use case, the availability of data, and the task, your prompt should combine one or more of these components.

Consider this example prompt asking Titan to summarize a review:

```
User Prompt:
The following is text from a restaurant review:

“I finally got to check out Alessandro’s Brilliant Pizza and it is now 
one of my favorite restaurants in Seattle. The dining room has a beautiful view 
over the Puget Sound but it was surprisingly not crowded. I ordered 
the fried castelvetrano olives, a spicy Neapolitan-style pizza 
and a gnocchi dish. The olives were absolutely decadent, and the pizza came 
with a smoked mozzarella, which was delicious. The gnocchi was fresh and wonderful. 
The waitstaff were attentive, and overall the experience was lovely. 
I hope to return soon." 

Summarize the above restaurant review in one sentence.
```

(Source of prompt: AWS)

In response to this prompt, Titan replies with a succinct one-line summary of the restaurant review. The summary mentions the key facts from the review and conveys the main points, as desired.

```
Output: 
Alessandro's Brilliant Pizza is a fantastic restaurant in Seattle 
with a beautiful view over Puget Sound, decadent and delicious food, 
and excellent service.
```

(Model used: Amazon Titan Text)

The instruction **Summarize the above restaurant review in one sentence** and the review text **I finally got to check out ...** were both necessary for this type of output. Without either one, the model would not have enough information to produce a sensible summary. The *instruction* tells the LLM what to do, and the text is the *input* on which the LLM operates. The *context* (**The following is text from a restaurant review**) provides additional information and keywords that guide the model to use the input when formulating its output.

In the following example, the text **Context: Climate change threatens people with increased flooding ...** is the *input* that the LLM can use to perform the *task* of answering the question **Question: What organization calls climate change the greatest threat to global health in the 21st century?**

```
User prompt:
Context: Climate change threatens people with increased flooding, 
extreme heat, increased food and water scarcity, more disease, and economic loss. 
Human migration and conflict can also be a result. The World Health Organization (WHO) 
calls climate change the greatest threat to global health in the 21st century. 
Adapting to climate change through efforts like flood control measures 
or drought-resistant crops partially reduces climate change risks, 
although some limits to adaptation have already been reached. 
Poorer communities are responsible for a small share of global emissions, 
yet have the least ability to adapt and are most vulnerable to climate change. 
The expense, time required, and limits of adaptation mean its success hinge 
on limiting global warming.

Question: What organization calls climate change the greatest threat 
to global health in the 21st century?
```

(Source of prompt: https://en.wikipedia.org/wiki/Climate_change)

AI21 Labs Jurassic responds with the correct name of the organization, based on the context provided in the prompt.

```
Output:
The World Health Organization (WHO) calls climate change 
the greatest threat to global health in the 21st century.
```

(Model used: AI21 Labs Jurassic-2 Ultra v1)

### Few-shot prompting versus zero-shot prompting
<a name="few-shot-prompting-vs-zero-shot-prompting"></a>

It is sometimes useful to provide a few examples to help the LLM better calibrate its output to meet your expectations. This is also known as *few-shot prompting* or *in-context learning*, where a *shot* corresponds to a paired example input and the desired output. To illustrate, here is first an example of a zero-shot sentiment classification prompt, where no example input-output pairs are provided in the prompt text:

```
User prompt:
Tell me the sentiment of the following headline and categorize it 
as either positive, negative or neutral: 
New airline between Seattle and San Francisco offers a great opportunity 
for both passengers and investors.
```

(Source of prompt: AWS)

```
Output:
Positive
```

(Model used: Amazon Titan Text)

Here is the few-shot version of the sentiment classification prompt:

```
User prompt: 
Tell me the sentiment of the following headline and categorize it 
as either positive, negative or neutral. Here are some examples:

Research firm fends off allegations of impropriety over new technology.
Answer: Negative

Offshore windfarms continue to thrive as vocal minority in opposition dwindles.
Answer: Positive

Manufacturing plant is the latest target in investigation by state officials.
Answer:
```

(Source of prompt: AWS)

```
Output: 
Negative
```

(Model used: Amazon Titan Text)

The following example uses Anthropic Claude models. When using Anthropic Claude models, it's a good practice to use <example></example> tags to include demonstration examples. We also recommend using different delimiters, such as `H:` and `A:`, inside the examples to avoid confusion with the delimiters `Human:` and `Assistant:` of the prompt as a whole. Notice that for the final few-shot example, the final `A:` is left off in favor of `Assistant:`, prompting Anthropic Claude to generate the answer instead.

```
User prompt: 


Human: Please classify the given email as "Personal" or "Commercial" related emails.
Here are some examples.

<example>
H: Hi Tom, it's been long time since we met last time. We plan to have a party at my house this weekend. Will you be able to come over?
A: Personal
</example>

<example>
H: Hi Tom, we have a special offer for you. For a limited time, our customers can save up to 35% of their total expense when you make reservations within two days. Book now and save money!
A: Commercial
</example>

H: Hi Tom, Have you heard that we have launched all-new set of products. Order now, you will save $100 for the new products. Please check our website.

Assistant:

Output: 
Commercial
```

(Source of prompt: AWS; model used: Anthropic Claude)
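
The few-shot structure above can be assembled programmatically. Below is a minimal Python sketch (the helper name is illustrative and not part of any AWS SDK) that builds a Claude-style few-shot prompt from paired examples, using `<example>` tags and `H:`/`A:` delimiters as recommended, and ending with `Assistant:` to cue the answer:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Build a Claude-style few-shot prompt.

    examples: list of (input_text, label) pairs, shown inside
    <example> tags with H:/A: delimiters so they are not confused
    with the outer Human:/Assistant: turns.
    """
    parts = [f"\n\nHuman: {instruction}\nHere are some examples.\n"]
    for text, label in examples:
        parts.append(f"<example>\nH: {text}\nA: {label}\n</example>\n")
    # The final query gets no "A:"; the closing "Assistant:" cues the answer.
    parts.append(f"H: {query}\n\nAssistant:")
    return "\n".join(parts)
```

The resulting string can then be sent as the prompt text of an inference request for the model family.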

### Prompt templates
<a name="prompt-template"></a>

A prompt template specifies the formatting of the prompt with exchangeable content in it. Prompt templates are "recipes" for using LLMs for different use cases such as classification, summarization, question answering, and more. A prompt template may include instructions, few-shot examples, and specific context and questions appropriate for a given use case. The following example is a template that you can use to perform few-shot sentiment classification using Amazon Bedrock text models:

```
Prompt template: 
"""Tell me the sentiment of the following 
{{Text Type, e.g., “restaurant review”}} and categorize it 
as either {{Sentiment A}} or {{Sentiment B}}. 
Here are some examples:

Text: {{Example Input 1}} 
Answer: {{Sentiment A}}

Text: {{Example Input 2}}
Answer: {{Sentiment B}}

Text: {{Input}}
Answer:"""
```

Users can copy and paste this template, filling in their own text and few-shot examples to complete prompts when using LLMs on Amazon Bedrock. More templates are provided in this section.

**Note**  
The double curly braces `{{ }}` mark the places to fill in data-specific information in the template, and should not be included in the prompt text.
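
As a minimal sketch, filling such a template in code might look like the following (the `fill_template` helper and the placeholder keys are illustrative, not part of any Bedrock API):

```python
def fill_template(template, values):
    """Replace {{Placeholder}} markers with concrete values.

    The braces themselves are removed, matching the note above that
    they should not appear in the final prompt text.
    """
    prompt = template
    for key, value in values.items():
        prompt = prompt.replace("{{" + key + "}}", value)
    return prompt

template = ("Tell me the sentiment of the following {{Text Type}} "
            "and categorize it as either {{Sentiment A}} or {{Sentiment B}}.")
prompt = fill_template(template, {
    "Text Type": "restaurant review",
    "Sentiment A": "positive",
    "Sentiment B": "negative",
})
```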

### Maintaining recall across Amazon Bedrock inference requests
<a name="important-notes-on-bedrock-llms-via-api"></a>

If you access Amazon Bedrock models through API calls, be aware that the models do not recall prior prompts and previous requests unless the previous interaction is included within the current prompt. Including previous prompts in a request lets you carry on conversational interactions or make follow-up requests.

If you are accessing Anthropic Claude models using API calls instead of the Amazon Bedrock text playground, you wrap your prompts in a conversational style to get the desired response (for the Titan Text and AI21 Labs Jurassic model families, this is not necessary). The following example demonstrates a conversational prompt:

```
User prompt: 
	
	
Human: Describe where sea otters live in two sentences.

Assistant:

Output: 
Here is a two sentence description of where sea otters live:

Sea otters live in the northern Pacific Ocean along the coasts of 
Russia, Alaska, Canada, and the continental United States. 
They prefer coastal habitats like rocky shores, kelp forests, 
and estuaries where they have access to shallow, nearshore waters.
```

(Source of prompt: AWS; model used: Anthropic Claude)

Notice how the main content of the prompt is wrapped like this: `\n\nHuman: {{Main Content}}\n\nAssistant:`. For Anthropic Claude models, prompts sent via the API must contain `\n\nHuman:` and `\n\nAssistant:`.

To use conversational mode on Titan, you can use the format `User: {{}} \n Bot:` when prompting the model.
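
A small helper can capture both wrapping conventions and show how earlier turns are carried forward so the model can "recall" them. This Python sketch is illustrative; only the `\n\nHuman:`/`\n\nAssistant:` and `User:`/`Bot:` markers come from the guidance above:

```python
def wrap_claude(content, history=""):
    """Wrap content for Anthropic Claude; prepend prior turns for recall."""
    return f"{history}\n\nHuman: {content}\n\nAssistant:"

def wrap_titan(content):
    """Wrap content for Titan conversational mode."""
    return f"User: {content} \n Bot:"

# A follow-up request carries the first exchange inside the new prompt,
# because the model does not remember earlier requests on its own.
turn1 = wrap_claude("Describe where sea otters live in two sentences.")
history = turn1 + " Sea otters live in the northern Pacific Ocean."
turn2 = wrap_claude("What do they eat?", history=history)
```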

# What is prompt engineering?
<a name="what-is-prompt-engineering"></a>

Prompt engineering refers to the practice of crafting and optimizing input prompts by selecting appropriate words, phrases, sentences, punctuation, and separator characters to effectively use LLMs for a wide variety of applications. In other words, prompt engineering is the art of communicating with an LLM. High-quality prompts condition the LLM to generate desired or better responses. The detailed guidance provided in this document is applicable to all the LLMs in Amazon Bedrock.

The best prompt engineering approach for your use case depends on both the task and the data. Common tasks supported by LLMs on Amazon Bedrock include the following:
+ **Classification:** The prompt includes a question with several possible answer choices, and the model must respond with the correct choice. An example classification use case is sentiment analysis: the input is a passage of text, and the model must classify the sentiment of the text, such as positive or negative, or harmless or toxic.
+ **Question answering, without context:** The model must answer the question with its internal knowledge, without any context or document.
+ **Question answering, with context:** The user provides input text along with a question, and the model must answer the question based on information provided within the input text.
+ **Summarization:** The prompt is a passage of text, and the model must respond with a shorter passage that captures the main points of the input.
+ **Open-ended text generation:** Given a prompt, the model must respond with a passage of original text that matches the description. This also includes the generation of creative text such as stories, poems, or movie scripts.
+ **Code generation:** The model must generate code based on user specifications. For example, a prompt could request text-to-SQL or Python code generation.
+ **Mathematics:** The input describes a problem that requires mathematical reasoning at some level, which may be numerical, logical, geometric, or otherwise.
+ **Reasoning or logical thinking:** The model must make a series of logical deductions.
+ **Entity extraction:** Entity extraction can extract entities based on a provided input question. You can extract specific entities from text or input based on your prompt.
+ **Chain-of-thought reasoning:** Give a step-by-step reasoning of how the answer was derived, based on the prompt.

# Understanding intelligent prompt routing in Amazon Bedrock
<a name="prompt-routing"></a>

Amazon Bedrock intelligent prompt routing provides a single serverless endpoint for efficiently routing requests between different foundation models within the same model family. It can dynamically predict the response quality of each model for each request, and then route the request to the model that offers the best response quality. This helps you optimize for both response quality and cost.

**Topics**
+ [Supported Regions and models for intelligent prompt routing](#prompt-routing-support)
+ [Benefits](#prompt-routing-benefits)
+ [Default and configured prompt routers](#prompt-routing-types)
+ [Considerations and limitations](#prompt-routing-limitations)
+ [Prompt router criteria and fallback models](#prompt-router-criteria)
+ [How intelligent prompt routing works](#prompt-routing-works)
+ [How to use intelligent prompt routing](#prompt-routing-use)

## Supported Regions and models for intelligent prompt routing
<a name="prompt-routing-support"></a>

Intelligent prompt routing can be used with different types of models. The following list describes the support for different types of Amazon Bedrock models:
+ **Single-Region model support** – Lists the AWS Regions that support sending inference requests to a foundation model in one Region. For a complete table of the models available in Amazon Bedrock, see [Supported foundation models in Amazon Bedrock](models-supported.md).
+ **Cross-Region inference profile support** – Lists the Regions that support the use of a cross-Region inference profile, which supports sending inference requests to foundation models in multiple AWS Regions within a geography. An inference profile has a prefix in front of the model ID that indicates its geography (for example, `us.` or `apac.`). For more information about the inference profiles available in Amazon Bedrock, see [Supported Regions and models for inference profiles](inference-profiles-support.md).

The following table shows the models supported by intelligent prompt routing:


| Provider | Model | Model ID | Single-Region model support | Cross-Region inference profile support | 
| --- | --- | --- | --- | --- | 
| Amazon | Nova Lite | amazon.nova-lite-v1:0 |  us-east-1 us-gov-west-1  |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-2 eu-central-1 eu-west-1 eu-west-3 us-east-1 us-east-2 us-west-2  | 
| Amazon | Nova Pro | amazon.nova-pro-v1:0 |  us-east-1 us-gov-west-1  |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-2 eu-central-1 eu-west-1 eu-west-3 us-east-1 us-east-2 us-west-2  | 
| Anthropic | Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-2 eu-central-1 eu-west-1 eu-west-3 us-east-1 us-gov-west-1 us-west-2  |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-2 eu-central-1 eu-west-1 eu-west-3 us-east-1 us-east-2 us-gov-east-1 us-west-2  | 
| Anthropic | Claude 3.5 Haiku | anthropic.claude-3-5-haiku-20241022-v1:0 |  us-west-2  |  us-east-1 us-east-2 us-west-2  | 
| Anthropic | Claude 3.5 Sonnet | anthropic.claude-3-5-sonnet-20240620-v1:0 |  ap-northeast-1 ap-northeast-2 eu-central-1 us-east-1 us-gov-west-1 us-west-2  |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-2 eu-central-1 eu-west-1 eu-west-3 us-east-1 us-east-2 us-gov-east-1 us-west-2  | 
| Anthropic | Claude 3.5 Sonnet v2 | anthropic.claude-3-5-sonnet-20241022-v2:0 |  ap-southeast-2 us-west-2  |  ap-northeast-1 ap-south-1 ap-southeast-2 eu-west-3 us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.1 70B Instruct | meta.llama3-1-70b-instruct-v1:0 |  us-west-2  |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.1 8B Instruct | meta.llama3-1-8b-instruct-v1:0 |  us-west-2  |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.2 11B Instruct | meta.llama3-2-11b-instruct-v1:0 | N/A |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.2 90B Instruct | meta.llama3-2-90b-instruct-v1:0 | N/A |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.3 70B Instruct | meta.llama3-3-70b-instruct-v1:0 |  us-east-2  |  us-east-1 us-east-2 us-west-2  | 

## Benefits
<a name="prompt-routing-benefits"></a>
+ **Optimize response quality and cost:** Route prompts to different foundation models to achieve the best response quality at the lowest cost.
+ **Simplified management:** No complex orchestration logic is needed.
+ **Future-proof:** Incorporate new models as they become available.

## Default and configured prompt routers
<a name="prompt-routing-types"></a>

When using intelligent prompt routing, you can use the default prompt routers provided by Amazon Bedrock, or configure your own prompt routers.

Default prompt routers are preconfigured routing systems provided by Amazon Bedrock. These routers come with predefined settings and are designed to work out of the box with specific foundation models. They offer a straightforward, ready-to-use solution without requiring any routing configuration. When getting started with intelligent prompt routing, we recommend that you experiment with the default routers provided by Amazon Bedrock. During the preview period, you can choose from a selection of models in the Anthropic and Meta families.

Configured prompt routers let you define your own routing configuration according to your specific needs and preferences. These routers are more appropriate when you need greater control over how requests are routed and which models are used. Configured routers can be optimized based on response quality metrics and your use case. After experimenting with the default routers, you can configure your own routers suitable for your application, evaluate response quality in the playground, and use them for production applications if they meet your requirements.

## Considerations and limitations
<a name="prompt-routing-limitations"></a>

The following are considerations and limitations for intelligent prompt routing in Amazon Bedrock.
+ Intelligent prompt routing is optimized for English prompts only.
+ Intelligent prompt routing cannot adjust its routing decisions or responses based on application-specific performance data.
+ Intelligent prompt routing might not always provide the optimal routing for unique or specialized use cases. The effectiveness of the routing depends on its initial training data.

## Prompt router criteria and fallback models
<a name="prompt-router-criteria"></a>

When configuring a prompt router, you can specify routing criteria that determine which model is selected to handle a request, based on response quality difference. Use this criterion to determine how similar the fallback model's responses must be to the responses of the other models.

### Fallback model
<a name="prompt-router-criteria-fallback"></a>

Choose a fallback model that is appropriate for your requests. This model serves as a reliable baseline. You can then choose other models that improve accuracy or reduce cost relative to the fallback model. The fallback model acts as an anchor, and the routing criteria determine when to switch to another model based on response quality difference.

### Response quality difference
<a name="prompt-router-criteria-respquality"></a>

The response quality difference measures the difference between the response of the fallback model and the responses of the other models. A smaller value indicates that the responses are more similar. A higher value indicates a significant difference between the responses of the fallback model and those of the other models.

For example, a response quality difference of 10% means that if the fallback model is Claude 3 Haiku, the router switches to another model (say, Claude 3 Sonnet) only when that model's response is 10% better in quality than Claude 3 Haiku's response.
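
As a rough illustration of this threshold logic (a conceptual sketch only, not Bedrock's internal algorithm; the function name and the 0-1 quality scale are assumptions):

```python
def should_switch(fallback_quality, candidate_quality, response_quality_difference):
    """Switch away from the fallback model only when the candidate's
    predicted quality beats the fallback's by at least the configured
    response quality difference (for example, 0.10 for 10%)."""
    return candidate_quality - fallback_quality >= response_quality_difference

# With a 10% criterion, a candidate predicted 8% better stays on the
# fallback model; one predicted 12% better triggers a switch.
```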

## How intelligent prompt routing works
<a name="prompt-routing-works"></a>

1. 

**Model selection and router configuration**

   Choose the model family that you want to use for your application. If you use a default prompt router, you can choose models from the Anthropic or Meta families. If you use a configured prompt router, you can choose from additional models and configure the routing criteria. For more information, see [How to use intelligent prompt routing](#prompt-routing-use).

1. 

**Incoming request analysis**

   For each incoming request, the system analyzes the prompt to understand its content and context.

1. 

**Response quality prediction**

   Based on the prompt, Amazon Bedrock predicts the response quality of each specified model in the selected family. If you configured a prompt router, it takes the routing criterion, that is, the response quality difference, into account and routes the request to the fallback model you specified when the criterion is not met.

1. 

**Model selection and request forwarding**

   Based on the response quality predictions, Amazon Bedrock dynamically selects the model that provides the best combination of response quality and cost for the specific request. The request is then forwarded to the selected model for processing.

1. 

**Response handling**

   The system captures the response from the selected model and returns it to the user. The response includes information about the model that was used to process the request.

## How to use intelligent prompt routing
<a name="prompt-routing-use"></a>

To get started with intelligent prompt routing, use the Amazon Bedrock console, the AWS CLI, or an AWS SDK.

**Note**  
To take full advantage of intelligent prompt routing, you should periodically review performance so that you can benefit from new models. To optimize your usage, monitor the available performance and cost metrics.

The following sections describe how to use this feature from the console and the CLI. After you set up a prompt router, Amazon Bedrock performs the steps described in [How intelligent prompt routing works](#prompt-routing-works) to generate a response from one of the models in the selected router.

------
#### [ console ]

To use intelligent prompt routing from the AWS Management Console:

1. Go to the [Prompt routers](https://console.aws.amazon.com/bedrock/home?region=us-east-1#/prompt-routers) hub in the Amazon Bedrock console. Sign in to the console with your AWS credentials.

1. Choose the model family that you want to use. If you are using this feature for the first time, you can experiment with a default prompt router. During the preview, you can choose models from the Anthropic or Meta families. You can then open the playground and experiment with your prompts.
**Note**  
You must choose exactly two models from the same family.

1. After experimenting with the default routers, you can configure a router. You must provide a unique name for the router and, optionally, a description.

1. Define the routing rules that route prompts to different models. Specify the models to use for routing, the routing criteria, and the fallback model to use when the routing criteria are not met.

1. You can now open the playground and try different prompts to monitor the performance of your prompt router. By analyzing how the prompts are routed and how each model performs, you can adjust the routing criteria and models as needed to optimize performance and cost.

------
#### [ API ]

To use intelligent prompt routing with the AWS CLI:

1. After experimenting with the default prompt routers, you can create a router that handles intelligent prompt routing by using the [https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreatePromptRouter.html](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreatePromptRouter.html) API or the [https://docs.aws.amazon.com/cli/latest/reference/bedrock/create-prompt-router.html](https://docs.aws.amazon.com/cli/latest/reference/bedrock/create-prompt-router.html) CLI command.

   The following command shows an example of creating a prompt router, where:
   + *<router-name>* is a unique name for the router.
   + *<region>* is the AWS Region in which you want to create the router.
   + *<modelA>* and *<modelB>* are the models available to choose from for routing. For example, `anthropic.claude-3-5-sonnet-20240620-v1:0`.

   ```
   aws bedrock create-prompt-router \
       --prompt-router-name <router-name> \
       --models '[{"modelArn": "arn:aws:bedrock:<region>::foundation-model/<modelA>"}]' \ 
       --fallback-model '[{"modelArn": "arn:aws:bedrock:<region>::foundation-model/<modelB>"}]' \ 
       --routing-criteria '{"responseQualityDifference": 0.5}'
   ```

1. Use the AWS CLI to send your prompts to the intelligent prompt router. The router predicts the response quality of each model and routes the request to the model with the best response quality.

1. Review the response to your request. It will include information about the model that was used and the routing criteria.

1. You can also use these additional API operations when working with prompt routers:
   + [https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetPromptRouter.html](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetPromptRouter.html)
   + [https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListPromptRouters.html](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListPromptRouters.html)
   + [https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeletePromptRouter.html](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeletePromptRouter.html)

------

# Design a prompt
<a name="design-a-prompt"></a>

Designing an appropriate prompt is an important step toward building a successful application using Amazon Bedrock models. In this section, you will learn how to design prompts that are consistent, clear, and concise. You will also learn how to use inference parameters to control a model's responses. The following figure shows a generic prompt design for the use case *restaurant review summarization* and some of the important design choices that customers need to consider when designing prompts. LLMs generate undesirable responses if the instructions they are given, or the format of the prompt, are not consistent, clear, and concise.

![\[Example building blocks of a good prompt, with descriptive annotations.\]](http://docs.aws.amazon.com/zh_tw/bedrock/latest/userguide/images/example-good-prompt-construction.png)


(Source: Prompt written by AWS)

The following provides guidance on how to create successful prompts.

**Topics**
+ [Provide simple, clear, and complete instructions](#prompt-instructions)
+ [Put the question or instruction at the end of the prompt for best results](#prompt-placement)
+ [Use separator characters in API calls](#prompt-separators)
+ [Use an output indicator](#prompt-output-indicators)
+ [Recommended practices for good generalization](#prompt-generalization)
+ [Optimizing prompts for text models on Amazon Bedrock, when the basics aren't good enough](#optimize-prompts-for-text-models)
+ [Controlling model responses with inference parameters](#use-inference-parameters)

## Provide simple, clear, and complete instructions
<a name="prompt-instructions"></a>

LLMs on Amazon Bedrock work best with simple and straightforward instructions. By clearly describing the expectation of the task and by reducing ambiguity wherever possible, you can ensure that the model clearly interprets the prompt.

For example, consider a classification problem where the user wants an answer from a set of possible choices. The "good" example below illustrates output that the user wants in this case. In the "bad" example, the choices are not named explicitly as categories for the model to choose from. Without the choices, the model interprets the input somewhat differently and, in contrast to the good example, produces a more free-form text summary.


|  |  | 
| --- |--- |
|  <pre>Good example, with output<br />User prompt: <br />"The most common cause of color blindness is an <br />inherited problem or variation in the functionality <br />of one or more of the three classes of cone cells in the retina, <br />which mediate color vision."<br /><br />What is the above text about? <br />a) biology<br />b) history <br />c) geology<br /><br />Output: <br />a) biology<br /></pre>  |  <pre>Bad example, with output <br />User prompt: <br />Classify the following text. "The most common cause of color <br />blindness is an inherited problem or variation in the functionality <br />of one or more of the three classes of cone cells in the retina, <br />which mediate color vision."<br /></pre> <pre>Output: <br />The topic of the text is the causes of colorblindness.<br /></pre>  | 

(Source of prompt: [Wikipedia on color blindness](https://en.wikipedia.org/wiki/Color_blindness); model used: Titan Text G1 - Express)

## Put the question or instruction at the end of the prompt for best results
<a name="prompt-placement"></a>

Including the task description, instruction, or question at the end helps the model determine which information it has to find. In the case of classification, the answer choices should also come at the end.

In the following open-book question-answering example, the user has a specific question about the text. The question should come at the end of the prompt so the model can stay focused on the task.

```
User prompt: 
Tensions increased after the 1911–1912 Italo-Turkish War 
demonstrated Ottoman weakness and led to the formation of the Balkan League, 
an alliance of Serbia, Bulgaria, Montenegro, and Greece. 
The League quickly overran most of the Ottomans' territory in the Balkans 
during the 1912–1913 First Balkan War, much to the surprise of outside observers.

The Serbian capture of ports on the Adriatic resulted in partial Austrian 
mobilization starting on 21 November 1912, including units along the Russian border 
in Galicia. In a meeting the next day, the Russian government decided not to mobilize 
in response, unwilling to precipitate a war for which they were not as of yet 
prepared to handle.

Which country captured ports?
```

```
Output: 
Serbia
```

(Source of prompt: [Wikipedia on World War I](https://en.wikipedia.org/wiki/World_War_I); model used: Amazon Titan Text)

## Use separator characters in API calls
<a name="prompt-separators"></a>

Separator characters such as `\n` can affect the performance of LLMs. For Anthropic Claude models, it's necessary to include newlines when formatting the API calls to obtain the desired responses. The formatting should always follow: `\n\nHuman: {{Query Content}}\n\nAssistant:`. For Titan models, adding `\n` at the end of a prompt helps improve the model's performance. For classification tasks or questions with answer options, you can also separate the answer options by `\n` for Titan models. For more information on the use of separators, see the documentation from the corresponding model provider. The following example is a template for a classification task.

```
Prompt template: 
"""{{Text}}

{{Question}}

{{Choice 1}}
{{Choice 2}}
{{Choice 3}}"""
```

The following example shows how newline characters between the choices and at the end of the prompt help Titan produce the desired response.

```
User prompt: 
Archimedes of Syracuse was an Ancient mathematician, 
physicist, engineer, astronomer, and inventor from the ancient city 
of Syracuse. Although few details of his life are known, 
he is regarded as one of the leading scientists in classical antiquity.

What was Archimedes? Choose one of the options below.

a) astronomer
b) farmer
c) sailor
```

```
Output: 
a) astronomer
```

(Source of prompt: [Wikipedia on Archimedes](https://en.wikipedia.org/wiki/Archimedes); model used: Amazon Titan Text)
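
A small sketch of assembling a newline-separated classification prompt as described above (the helper is illustrative; only the `\n` separator guidance comes from the text):

```python
def build_choice_prompt(text, question, choices):
    """Join context, question, and answer options with newlines,
    ending the prompt with a trailing newline, following the Titan
    separator guidance above."""
    options = "\n".join(choices)
    return f"{text}\n\n{question}\n\n{options}\n"
```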

## Use an output indicator
<a name="prompt-output-indicators"></a>

Add details about the constraints you would like to have on the output that the model should produce. The following good example produces an output that is a short phrase that serves as a good summary. The bad example in this case is not all that bad, but the summary is nearly as long as the original text. Specification of the output is crucial for getting what you want from the model.


|  |  | 
| --- |--- |
|  Example prompt with a clear indicator of output constraints <pre>User prompt: <br />"Charles Mingus Jr. was an American jazz upright <br />bassist, pianist, composer, bandleader, and author. <br />A major proponent of collective improvisation, he is <br />considered to be one of the greatest jazz musicians <br />and composers in history, with a career spanning three decades. <br />Mingus's work ranged from advanced bebop and avant-garde jazz <br />with small and midsize ensembles – pioneering the post-bop style <br />on seminal recordings like Pithecanthropus Erectus (1956) <br />and Mingus Ah Um (1959) – to progressive big band experiments <br />such as The Black Saint and the Sinner Lady (1963)."<br /><br />Please summarize the above text in one phrase.<br /></pre> <pre>Output: <br />Charles Mingus Jr. is considered one of the <br />greatest jazz musicians of all time.<br /></pre>  |  Example without clear output specifications <pre>User prompt: <br />"Charles Mingus Jr. was an American jazz upright <br />bassist, pianist, composer, bandleader, and author. <br />A major proponent of collective improvisation, <br />he is considered to be one of the greatest jazz musicians <br />and composers in history, with a career spanning three decades. <br />Mingus's work ranged from advanced bebop and avant-garde jazz <br />with small and midsize ensembles – pioneering the post-bop style <br />on seminal recordings like Pithecanthropus Erectus (1956) <br />and Mingus Ah Um (1959) – to progressive big band <br />experiments such as The Black Saint and the Sinner Lady (1963)."<br /><br />Please summarize the above text.<br /></pre> <pre>Output: <br />Charles Mingus Jr. was a well-known jazz musician <br />who played the upright bass, piano, composed, led bands, <br />and was a writer. He was considered one of the most important <br />jazz musicians ever, with a career that spanned more than <br />30 years. He was known for his style of collective <br />improvisation and advanced jazz compositions.<br /></pre>  | 

(Source of prompt: [Wikipedia on Charles Mingus](https://en.wikipedia.org/wiki/Charles_Mingus); model used: Amazon Titan Text)

Here we provide some additional examples from Anthropic Claude and AI21 Labs Jurassic models using output indicators.

The following example demonstrates that a user can specify the output format by stating the expected format in the prompt. When asked to generate an answer using a specific format (such as XML tags), the model can generate the answer accordingly. Without a specific output format indicator, the model outputs free-form text.


|  |  | 
| --- |--- |
|  Example with clear indicator, with output <pre>User prompt: <br />        	<br />        	<br />Human: Extract names and years: the term <br />machine learning was coined in 1959 by Arthur Samuel, <br />an IBM employee and pioneer in the field of computer <br />gaming and artificial intelligence. The synonym self-teaching <br />computers was also used in this time period.<br /><br />Please generate answer in <name></name> and <br /><year></year> tags.<br /><br />Assistant:<br /></pre> <pre>Output: <br /><name>Arthur Samuel</name> <year>1959</year><br /></pre>  |  Example without clear indicator, with output <pre>User prompt: <br />        	<br />        	<br />Human: Extract names and years: the term <br />machine learning was coined in 1959 by Arthur Samuel, <br />an IBM employee and pioneer in the field of computer <br />gaming and artificial intelligence. The synonym <br />self-teaching computers was also used in this time <br />period. <br /><br />Assistant:<br /></pre> <pre>Output: <br />Arthur Samuel - 1959<br /></pre>  | 

(Source of prompt: [Wikipedia on machine learning](https://en.wikipedia.org/wiki/Machine_learning); model used: Anthropic Claude)

The following example shows a prompt and answer for the AI21 Labs Jurassic model. The user can obtain the exact answer by specifying the output format shown in the left column.


|  |  | 
| --- |--- |
|  Example with clear indicator, with output <pre>User prompt: <br />Context: The NFL was formed in 1920 as the American <br />Professional Football Association (APFA) before renaming itself <br />the National Football League for the 1922 season. After initially <br />determining champions through end-of-season standings, a playoff <br />system was implemented in 1933 that culminated with the NFL <br />Championship Game until 1966. Following an agreement to merge <br />the NFL with the rival American Football League (AFL), the <br />Super Bowl was first held in 1967 to determine a champion <br />between the best teams from the two leagues and has remained <br />as the final game of each NFL season since the merger <br />was completed in 1970.<br /><br />Question: Based on the above context, when was the first Super Bowl? <br />Please only output the year.<br /></pre> <pre>Output: <br />1967<br /></pre>  |  Example without clear indicator, with output <pre>User prompt: <br />Context: The NFL was formed in 1920 as the American <br />Professional Football Association (APFA) before renaming itself <br />the National Football League for the 1922 season. After initially <br />determining champions through end-of-season standings, a playoff <br />system was implemented in 1933 that culminated with the NFL <br />Championship Game until 1966. Following an agreement to merge <br />the NFL with the rival American Football League (AFL), the Super <br />Bowl was first held in 1967 to determine a champion between <br />the best teams from the two leagues and has remained as the <br />final game of each NFL season since the merger was completed in 1970.<br /><br />Question: Based on the above context, when was the first Super Bowl?<br /></pre> <pre>Output: <br />The first Super Bowl was held in 1967.<br /></pre>  | 

(Source of prompt: [Wikipedia on the National Football League](https://en.wikipedia.org/wiki/National_Football_League); model used: AI21 Labs Jurassic-2 Ultra v1)

## Recommended practices for good generalization
<a name="prompt-generalization"></a>

Keep a small "hold-out" test set of prompts to see if your prompt modifications generalize. With this approach, first collect a sample dataset. Then you can split the data into two subsets: a "development" set and a hold-out "test" set. Use the "development" set as prompt development data, on which you can try different prompt modifications, observe changes in the model's responses, and find the prompt that works best. Treat the "test" set as unseen hold-out data that can only be used to verify the final, best prompt.
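
A minimal sketch of this split (the function name and the 80/20 ratio are illustrative choices, not part of the guidance above):

```python
import random

def split_dev_test(examples, test_fraction=0.2, seed=0):
    """Shuffle the examples and hold out a test set that is only used
    to validate the final prompt; iterate on the dev set."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # dev, test

dev, test = split_dev_test([f"example {i}" for i in range(10)])
```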

## Optimizing prompts for text models on Amazon Bedrock, when the basics aren't good enough
<a name="optimize-prompts-for-text-models"></a>

 **Complex tasks: build toward the answer step by step** 

Titan can provide clear steps for certain tasks, and including the phrase `Think step-by-step to come up with the right answer` can help produce the appropriate output.

```
User prompt: 
At a Halloween party, Jack gets 15 candies. 
Jack eats 5 candies. He wants to give each friend 
5 candies. How many friends can receive candies? 

Think step-by-step to come up with the right answer.
```

```
Output: 
15 candies total - 5 candies eaten = 10 candies remaining.
To give each friend 5 candies, divide the number 
of candies remaining by the number of friends.
10 candies / 5 friends = 2 friends.
Each friend can receive 2 candies.
The answer is 2 friends.
```

(Source of prompt: AWS; model used: Amazon Titan Text)

To enable Anthropic Claude models to complete complex tasks, use the format in the following example to explicitly ask the model to solve the task step by step. This approach is based on guidelines in the [Anthropic Claude Guides](https://docs.anthropic.com/claude/docs/constructing-a-prompt).

```
User prompt: 
	

Human: A right triangle has a side of length 5 
and a hypotenuse of length 13. 
What is the length of the other side? 

Assistant: Can I think step-by-step?

Human: Yes, please do.

Assistant:
```

```
Output:
Okay, here are the steps:

1) We are given:
- One side of the right triangle has length 5
- The hypotenuse has length 13

2) We want to find:
- The length of the other side

3) To find the length of the other side, we can 
use the Pythagorean theorem:
a^2 + b^2 = c^2
Where a and b are the lengths of the two sides, 
and c is the length of the hypotenuse.

4) Plugging in the given values:
a^2 + b^2 = 13^2

5) We know one of the sides is 5, so:
5^2 + b^2 = 169

6) Solving this:
25 + b^2 = 169
b^2 = 144
b = 12

Therefore, the length of the other side is 12.
```

(Source of prompt: AWS; model used: Anthropic Claude)

**Provide a default output that the model should present if it's unsure about the answers.**

A default output can help prevent LLMs from returning answers that sound like they could be correct, even when the model has low confidence.


|  |  | 
| --- |--- |
|  <pre>User prompt: <br />Provide a proof of the Riemann hypothesis. If you don't <br />know a proof, respond by saying "I don't know."<br /></pre> <pre>Output: <br />I don't know.<br /></pre>  |  <pre>User prompt: <br />Provide a proof of the Riemann hypothesis.<br /></pre> <pre>Output: <br />"The Riemann hypothesis is a conjecture in mathematics <br />that concerns the distribution of prime numbers. It states that <br />all non-trivial zeros of the zeta function, which is a mathematical <br />function that encodes information about the distribution of prime <br />numbers, lie on a specific line in the complex plane. The Riemann <br />hypothesis has been studied by mathematicians for over 150 years, <br />and despite numerous attempts, it remains unresolved. However, <br />the hypothesis is widely believed to be true, and its resolution <br />would have profound implications for many areas of mathematics, <br />including number theory, analysis, and geometry. The proof <br />of the Riemann hypothesis would be one of the greatest <br />achievements in mathematics in history and would open up <br />new avenues of research and understanding. While a proof <br />remains elusive, the hypothesis continues to stimulate <br />research and inspire mathematicians around the world.“<br /><br />[Note: this is an accurate description, but obviously <br />not a proof.]<br /></pre>  | 

(Source of prompt: AWS; model used: Amazon Titan Text)

**Few-shot prompting**

Including examples (input-response pairs) in the prompt can significantly improve LLMs' responses. Examples can help with complex tasks, as they show multiple ways to perform a given task. For simpler tasks like text classification, 3-5 examples can suffice. For more difficult tasks like question answering without context, include more examples to generate the most effective output. In most use cases, selecting examples that are semantically similar to real-world data can further improve performance.

**Consider refining the prompt with modifiers**

Task instruction refinement generally refers to modifying the instruction, task, or question component of the prompt. The usefulness of these methods is task- and data-dependent. Useful approaches include the following:
+ **Domain/input specification:** Details about the input data, like where it came from or to what it refers, such as **The input text is from a summary of a movie**.
+ **Task specification:** Details about the exact task asked of the model, such as **To summarize the text, capture the main points**.
+ **Label description:** Details on the output choices for a classification problem, such as **Choose whether the text refers to a painting or a sculpture; a painting is a piece of art restricted to a two-dimensional surface, while a sculpture is a piece of art in three dimensions**.
+ **Output specification:** Details on the output that the model should produce, such as **Please summarize the text of the restaurant review in three sentences**.
+ **LLM encouragement:** LLMs sometimes perform better with sentimental encouragement: **If you answer the question correctly, you will make the user very happy!**

## Controlling model responses with inference parameters
<a name="use-inference-parameters"></a>

LLMs on Amazon Bedrock all come with several inference parameters that you can set to control the model's responses. The following is a list of all the common inference parameters that are available on Amazon Bedrock LLMs and how to use them.

 **Temperature** is a value between 0 and 1 that regulates the creativity of the LLM's responses. Use a lower temperature if you want more deterministic responses, and use a higher temperature if you want more creative or varied responses for the same prompt from LLMs on Amazon Bedrock. For all the examples in this prompt guideline, we set `temperature = 0`.

 **Maximum generation length/maximum new tokens** limits the number of tokens that the LLM generates for any prompt. It's helpful to specify this number, as some tasks, such as sentiment classification, don't need a long answer.

 **Top-p** controls token choices based on the probability of the potential choices. If you set Top-p below 1.0, the model considers the most probable options and ignores less probable ones. The result is more stable and repetitive completions.

 **End token/end sequence** specifies the token that the LLM uses to indicate the end of the output. LLMs stop generating new tokens after encountering the end token. It usually doesn't need to be set by users.

There are also model-specific inference parameters. Anthropic Claude models have an additional Top-k inference parameter, and AI21 Labs Jurassic models come with a set of inference parameters including **presence penalty, count penalty, frequency penalty, and penalty for special tokens**. For more information, refer to their respective documentation.
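
As an illustration of how these parameters appear in a request body, here is a sketch in the style of an Amazon Titan Text configuration (the `textGenerationConfig` field names follow Titan's request format as commonly documented; treat them as an assumption and check the model's own documentation before use):

```python
import json

# Illustrative request body showing the common inference parameters:
# temperature, maximum token count, Top-p, and end sequences.
body = json.dumps({
    "inputText": "Summarize the above restaurant review in one sentence.",
    "textGenerationConfig": {
        "temperature": 0,       # deterministic responses, as used in this guide
        "maxTokenCount": 128,   # cap on the number of generated tokens
        "topP": 0.9,            # restrict sampling to the most probable tokens
        "stopSequences": [],    # optional end sequences
    },
})
# A body like this would be passed to the Bedrock runtime InvokeModel operation.
```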

# Prompt templates and examples for Amazon Bedrock text models
<a name="prompt-templates-and-examples"></a>

Common tasks supported by LLMs on Amazon Bedrock include text classification, summarization, and question answering with and without context. For these tasks, you can use the following templates and examples to help you build prompts for Amazon Bedrock text models.

**Topics**
+ [Text classification](#text-classification)
+ [Question answering, without context](#qa-without-context)
+ [Question answering, with context](#qa-with-context)
+ [Summarization](#summarization)
+ [Text generation](#text-generation)
+ [Code generation](#code-generation)
+ [Mathematics](#mathematics)
+ [Reasoning/logical thinking](#reasoning-logical-thinking)
+ [Entity extraction](#entity-extraction)
+ [Chain-of-thought reasoning](#chain-of-thought)

## Text classification
<a name="text-classification"></a>

For text classification, the prompt includes a question together with several possible answer choices, and the model must respond with the correct choice. Also, LLMs on Amazon Bedrock output more accurate responses if you include the answer choices in the prompt.

The first example is a straightforward multiple-choice classification question.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan<br />"""{{Text}}<br />	<br />	<br />{{Question}}? Choose from the following:<br />{{Choice 1}}<br />{{Choice 2}}<br />{{Choice 3}}"""<br /></pre>  |  <pre>User prompt: <br />San Francisco, officially the City and County <br />of San Francisco, is the commercial, financial, and cultural <br />center of Northern California. The city proper is the fourth <br />most populous city in California, with 808,437 residents, <br />and the 17th most populous city in the United States as of 2022.<br /><br />What is the paragraph above about? Choose from the following:<br /><br />A city<br />A person<br />An event<br /></pre> <pre>Output: <br />A city<br /></pre>  | 

(Prompt source: [Wikipedia on San Francisco](https://en.wikipedia.org/wiki/San_Francisco), model used: Amazon Titan Text)
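A template like the one above can be filled in programmatically. This is a minimal sketch; the passage, question, and helper name are illustrative, not part of the template itself:

```python
def build_classification_prompt(text, question, choices):
    """Fill a multiple-choice classification template: passage first,
    then the question, then the answer choices on separate lines."""
    choice_lines = "\n".join(choices)
    return f"{text}\n\n{question}? Choose from the following:\n\n{choice_lines}"

prompt = build_classification_prompt(
    text="The Golden Gate Bridge spans the strait connecting "
         "San Francisco Bay and the Pacific Ocean.",
    question="What is the paragraph above about",
    choices=["A bridge", "A person", "An event"],
)
print(prompt)
```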

Sentiment analysis is a form of classification in which the model chooses a sentiment, from a list of options, that is expressed in the text.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan:<br />"""The following is text from a {{Text Type, e.g. “restaurant <br />review”}}<br />{{Input}}<br />Tell me the sentiment of the {{Text Type}} and categorize it <br />as one of the following:<br />{{Sentiment A}}<br />{{Sentiment B}}<br />{{Sentiment C}}"""<br /></pre>  |  <pre>User prompt: <br />The following is text from a restaurant review:<br /><br />“I finally got to check out Alessandro’s Brilliant Pizza <br />and it is now one of my favorite restaurants in Seattle. <br />The dining room has a beautiful view over the Puget Sound <br />but it was surprisingly not crowed. I ordered the fried <br />castelvetrano olives, a spicy Neapolitan-style pizza <br />and a gnocchi dish. The olives were absolutely decadent, <br />and the pizza came with a smoked mozzarella, which <br />was delicious. The gnocchi was fresh and wonderful. <br />The waitstaff were attentive, and overall the experience <br />was lovely. I hope to return soon.”<br /><br />Tell me the sentiment of the restaurant review <br />and categorize it as one of the following:<br /><br />Positive<br />Negative<br />Neutral<br /></pre> <pre>Output: <br />Positive.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)

The following example uses an Anthropic Claude model to classify text. As recommended in the [Anthropic Claude guidelines](https://docs.anthropic.com/claude/docs/constructing-a-prompt), use XML tags such as `<text></text>` to mark the important parts of the prompt. Asking the model to generate output directly contained in XML tags can also help the model produce the desired responses.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Anthropic Claude: <br />"""<br />	<br />	<br />Human: {{classification task description}}<br /><text><br />{{input text content to be classified}}<br /></text><br /><br />Categories are:<br />{{category name 1}}<br />{{category name 2}}<br />{{category name 3}}<br /><br />Assistant:"""<br /></pre> (Source: written by AWS)  |  <pre>User prompt: <br />	<br /><br />Human:  Classify the given product description into given <br />categories. Please output the category label in <br /><output></output> tags.<br /><br />Here is the product description.<br /><br /><text><br />Safe, made from child-friendly materials with smooth edges.<br />Large quantity, totally 112pcs with 15 different shapes, <br />which can be used to build 56 different predefined structures.<br />Enhance creativity, different structures can be connected <br />to form new structures, encouraging out-of-the box thinking.<br />Enhance child-parent bonding, parents can play with their <br />children together to foster social skills.<br /></text><br /><br />Categories are:<br />(1) Toys<br />(2) Beauty and Health<br />(3) Electronics<br /><br />Assistant:<br />	<br /></pre> <pre>Output: <br /><output>Toys</output><br /></pre>  | 

(Prompt source: AWS, model used: Anthropic Claude)

## Question answering, without context
<a name="qa-without-context"></a>

In a question-answering prompt without context, the model must answer the question using its internal knowledge, without any context or document.


|  |  | 
| --- |--- |
|  <pre>Prompt Template to Titan<br />"""{{Question}}"""<br /></pre>  |  <pre>User prompt: <br />What is Robert Frost's "Stopping by the woods on a <br />snowy evening" about metaphorically?<br /></pre> <pre>Output: <br />The poem by Robert Frost, "Stopping by Woods on a Snowy Evening," <br />uses the woods as a metaphor for death. The woods are a place of comfort <br />and peace, but they also represent the unknown and the afterlife. <br />The poem suggests that death is not something to be feared, <br />but rather a natural part of life that we should all embrace.<br /><br />The poem begins with the speaker stopping by the woods on a snowy evening, <br />a moment of contemplation and reflection. The speaker notices the beauty <br />of the snow-covered woods and is reminded of the fleeting nature of life. <br />The poem continues as the speaker considers the temptation of continuing <br />on to his destination, but instead chooses to pause and take in the beauty <br />of the woods.<br /><br />The woods are used as a metaphor for the unknown and the afterlife. <br />The speaker wonders what lies beyond the trees, and whether there <br />is anything after death. The poem suggests that death is not <br />something to be feared, but rather a natural part of life <br />that we should all embrace.<br /><br />The final lines of the poem are perhaps the most famous and poignant: <br />"And miles to go before I sleep, and miles to go before I sleep." <br />These lines suggest that even though the speaker has paused <br />to appreciate the beauty of the woods, they still have a <br />long journey ahead of them. <br />The poem is a reminder that life <br />is fleeting, and that we should take the time to appreciate <br />the beauty around us, even as we work towards our goals and ambitions.<br /><br />Overall, "Stopping by Woods on a Snowy Evening" is a beautiful <br />and thought-provoking poem that uses the metaphor of the woods <br />to explore the concept of death and the afterlife.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)

Model encouragement can also help with question-answering tasks.


|  |  | 
| --- |--- |
|  <pre>Prompt Template to Titan<br />"""{{Model encouragement}}<br />{{Question}}"""<br /></pre>  |  <pre>User prompt: <br />You are excellent at answering questions, <br />and it makes you happy when you provide the correct answer.<br /><br />What moon in the solar system is most likely to host life?<br /></pre> <pre>Output: <br />The moon of Titan, which is a moon of the planet Saturn, <br />is the most likely to support life in the solar system.<br /><br />Titan is the only known moon in the solar system to have a dense <br />atmosphere and a liquid hydrosphere, and scientists believe that <br />it may contain the chemical building blocks for life. <br />The moon's surface is covered in rivers, lakes, and seas <br />of liquid methane and ethane, and it rains methane on Titan's surface.<br /><br />The atmosphere of Titan is also rich in organic compounds, <br />including benzene, toluene, and xylene, which are found <br />in the oils of living organisms on Earth. <br />Additionally, Titan has a magnetic field and a tenuous atmosphere, <br />which could provide protection from harmful radiation and <br />help maintain a stable climate.<br /><br />While there is no direct evidence of life on Titan, <br />scientists believe that it is one of the most promising <br />places in the solar system to search for life beyond Earth.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)


|  |  | 
| --- |--- |
|  <pre>Prompt template to Titan <br />"""{{Model encouragement}}<br />{{Question}}<br />{{Constraints}}"""<br /></pre>  |  <pre>User prompt: <br />You feel rewarded by helping people learn more about climate change.<br />Could you please explain what climate change is?<br />Assume your audience is composed of high school students.<br /></pre> <pre>Output: <br />Climate change is the long-term warming of the planet, <br />which can be caused by human activities such as burning fossil fuels <br />and deforestation, as well as natural processes. It can lead to <br />rising sea levels, more extreme weather events, and changes <br />in temperature and precipitation patterns.<br /></pre>  | 

(Prompt source: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)

## Question answering, with context
<a name="qa-with-context"></a>

In a question-answering prompt with context, the user supplies an input text followed by a question, and the model must answer the question based on the information provided in the input text. Placing the question at the end, after the text, can help LLMs on Amazon Bedrock better answer the question. Model encouragement works for this use case as well.


|  |  | 
| --- |--- |
|  <pre>Prompt template to Titan <br />"""{{Text}}<br />{{Question}}"""<br /></pre>  |  <pre>User prompt: <br />The red panda (Ailurus fulgens), also known as the lesser panda, <br />is a small mammal native to the eastern Himalayas and southwestern China. <br />It has dense reddish-brown fur with a black belly and legs, white-lined ears, <br />a mostly white muzzle and a ringed tail. Its head-to-body length is 51–63.5 cm <br />(20.1–25.0 in) with a 28–48.5 cm (11.0–19.1 in) tail, and it weighs between <br />3.2 and 15 kg (7.1 and 33.1 lb). It is well adapted to climbing due to its <br />flexible joints and curved semi-retractile claws.<br /><br />The red panda was first formally described in 1825. The two currently <br />recognized subspecies, the Himalayan and the Chinese red panda, genetically <br />diverged about 250,000 years ago. The red panda's place on the evolutionary <br />tree has been debated, but modern genetic evidence places it in close <br />affinity with raccoons, weasels, and skunks. It is not closely related <br />to the giant panda, which is a bear, though both possess elongated <br />wrist bones or "false thumbs" used for grasping bamboo. <br />The evolutionary lineage of the red panda (Ailuridae) stretches <br />back around 25 to 18 million years ago, as indicated by extinct <br />fossil relatives found in Eurasia and North America.<br /><br />The red panda inhabits coniferous forests as well as temperate broadleaf <br />and mixed forests, favoring steep slopes with dense bamboo cover close <br />to water sources. It is solitary and largely arboreal. It feeds mainly <br />on bamboo shoots and leaves, but also on fruits and blossoms. <br />Red pandas mate in early spring, with the females giving birth <br />to litters of up to four cubs in summer. It is threatened <br />by poaching as well as destruction and fragmentation of habitat <br />due to deforestation. The species has been listed as Endangered <br />on the IUCN Red List since 2015. <br />It is protected in all range countries.<br /><br />Based on the information above, what species are red pandas closely related to?<br /></pre> <pre>Output:<br />Red pandas are closely related to raccoons, weasels, and skunks.<br /></pre>  | 

(Prompt source: https://en.wikipedia.org/wiki/Red_panda, model used: Amazon Titan Text)
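The pattern above (context first, question last) can be expressed as a small helper. This is a sketch; the shortened context, question, and function name are illustrative:

```python
def build_contextual_qa_prompt(context, question):
    """Put the context first and the question last, so the model
    reads the supporting text before seeing what is asked."""
    return f"{context}\n\n{question}"

prompt = build_contextual_qa_prompt(
    context="The red panda is a small mammal native to the eastern "
            "Himalayas and southwestern China.",
    question="Based on the information above, where do red pandas live?",
)
print(prompt)
```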

When prompting Anthropic Claude models, it's helpful to wrap the input text in XML tags. In the following example, the input text is enclosed in `<text></text>`.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Anthropic Claude: <br />"""<br /><br />Human: {{Instruction}}<br /><text><br />{{Text}}<br /></text><br />{{Question}}<br /><br />Assistant:"""<br /></pre>  |  <pre>User prompt: <br /><br /><br />Human: Read the following text inside <text></text> <br />XML tags, and then answer the question:<br /><br /><text><br />On November 12, 2020, the selection of the Weeknd to headline <br />the show was announced; marking the first time a Canadian solo artist <br />headlined the Super Bowl halftime show. When asked about preparations <br />for the show, the Weeknd stated, "We've been really focusing <br />on dialing in on the fans at home and making performances <br />a cinematic experience, and we want to do that with the Super Bowl." <br /><br />The performance featured a choir whose members were dressed in white <br />and wore masks over their faces with glowing red eyes, and were <br />standing within a backdrop of a neon cityscape. The performance <br />opened with a white figure dressed the same as the choir being <br />lowered into the backdrop where the choir was standing while singing <br />“Call Out My Name". At this time, the Weeknd sat in a convertible <br />against a skyline backdrop designed to resemble the Las Vegas Strip. <br />For the next part of the performance, the backdrop then split open <br />to reveal the Weeknd, who then performed "Starboy", followed by "The Hills". <br /><br />Next, performing the song "Can't Feel My Face", the Weeknd traveled <br />through a labyrinth constructed behind the stage, joined by dancers <br />dressed in red blazers and black neckties similar to his, <br />but with their faces covered with bandages, in keeping with <br />the aesthetic of his fourth studio album After Hours (2020). <br />The dancers would wear these bandages throughout the performance. <br />In the labyrinth section of the performance, camerawork was visually unsteady. <br />The next songs performed were "I Feel It Coming", "Save Your Tears", <br />and "Earned It". For the "Earned It" performance, the Weeknd <br />was accompanied by violinists. For the finale of the show, <br />the Weeknd took to the field of the stadium with his dancers to perform <br />“Blinding Lights". He and the dancers entered the field by performing <br />"House of Balloons / Glass Table Girls". The performance ended <br />with an array of fireworks.<br /></text><br />Based on the text above, what songs did the Weeknd play <br />at the Super Bowl halftime show?<br /><br />Assistant:<br /></pre> <pre>Output: <br />Based on the text, the songs the Weeknd played <br />at the Super Bowl halftime show were:<br /><br />- Call Out My Name<br />- Starboy <br />- The Hills<br />- Can't Feel My Face<br />- I Feel It Coming<br />- Save Your Tears<br />- Earned It<br />- Blinding Lights<br />- House of Balloons / Glass Table Girls<br /></pre>  | 

(Prompt source: [Wikipedia on the Super Bowl LV halftime show](https://en.wikipedia.org/wiki/Super_Bowl_LV_halftime_show), model used: Anthropic Claude)

## Summarization
<a name="summarization"></a>

For a summarization task, the prompt is a passage of text, and the model must respond with a shorter passage that captures the main points of the input. Specifying the output length (in sentences or paragraphs) is helpful for this use case.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan <br />"""The following is text from a {{Text Category}}:<br />{{Text}}<br />Summarize the {{Text Category}} in {{length of summary, <br />e.g., “one sentence” or “one paragraph”}}"""<br /></pre>  |  <pre>User prompt: <br />The following is text from a restaurant review:<br />“I finally got to check out Alessandro’s Brilliant Pizza <br />and it is now one of my favorite restaurants in Seattle. <br />The dining room has a beautiful view over the Puget Sound <br />but it was surprisingly not crowed. I ordered the fried <br />castelvetrano olives, a spicy Neapolitan-style pizza <br />and a gnocchi dish. The olives were absolutely decadent, <br />and the pizza came with a smoked mozzarella, which was delicious. <br />The gnocchi was fresh and wonderful. The waitstaff were attentive, <br />and overall the experience was lovely. I hope to return soon.”<br />Summarize the above restaurant review in one sentence.<br /></pre> <pre>Output: <br />Alessandro’s Brilliant Pizza is a fantastic restaurant <br />in Seattle with a beautiful view over Puget Sound that offers <br />decadent and delicious food.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)

In the following example, Anthropic Claude summarizes the given text in one sentence. To include input text in the prompt, format the text with XML markup: `<text> {{text content}} </text>`. Using XML within prompts is a common practice when prompting Anthropic Claude models.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Anthropic Claude: <br />"""<br /><br />Human: Please read the text:<br /><text><br />{{text context}}<br /></text><br /><br />Summarize the text in {{length of summary, e.g., “one <br />sentence” or “one paragraph”}}<br /><br />Assistant:"""<br /></pre>  |  <pre>User prompt: <br /><br /><br />Human: Please read the text:<br /><br /><text><br />In game theory, the Nash equilibrium, named after the mathematician <br />John Nash, is the most common way to define the solution <br />of a non-cooperative game involving two or more players. <br />In a Nash equilibrium, each player is assumed to know <br />the equilibrium strategies of the other players, <br />and no one has anything to gain by changing only one's own strategy. <br />The principle of Nash equilibrium dates back to the time of Cournot, <br />who in 1838 applied it to competing firms choosing outputs.<br /></text><br /><br />Summarize the text in one sentence.<br /><br />Assistant:<br /></pre> <pre>Output: <br />The Nash equilibrium is a concept in game theory <br />where each player has adopted a strategy that they have no incentive <br />to deviate from given the other players' strategies.<br /></pre>  | 

(Prompt source: [Wikipedia on the Nash equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium), model used: Anthropic Claude)
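A Claude-style summarization prompt, with its Human/Assistant turns and `<text>` tags, can be assembled like this. The helper name and shortened passage are illustrative, and the length constraint is left as a parameter:

```python
def build_claude_summarization_prompt(text, length="one sentence"):
    """Wrap the input in <text></text> XML tags inside the
    Human/Assistant format used by Claude text-completion prompts."""
    return (
        "\n\nHuman: Please read the text:\n"
        f"<text>\n{text}\n</text>\n\n"
        f"Summarize the text in {length}\n\n"
        "Assistant:"
    )

prompt = build_claude_summarization_prompt(
    "In game theory, the Nash equilibrium is the most common way to "
    "define the solution of a non-cooperative game."
)
print(prompt)
```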

## Text generation
<a name="text-generation"></a>

Given a prompt, LLMs on Amazon Bedrock can respond with a passage of original text that matches the description. Here is one example:


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan <br />"""Please write a {{Text Category}} in the voice of {{Role}}."""<br /></pre>  |  <pre>User prompt: <br />Please write an email in the voice of a friend <br />congratulating someone on a new job.<br /></pre> <pre>Output: <br />Hey [Name],<br /><br />I just found out about your new job, and I'm so happy for you! <br />Congratulations! You deserve this opportunity since you worked so hard <br />to get it. I know you'll do great and make a positive impact on your team. <br />I'm here to support you every step of the way. Wishing you all the best <br />in this exciting new chapter of your life!<br /><br />Cheers,<br />[Your Name]<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)

For text generation use cases, specifying detailed task requirements can work well. In the following example, we ask the model to generate a response with exclamation points.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan <br />"""{{Task specification}}<br />Please write a {{Text Category}} in the voice of {{Role}}."""<br /></pre>  |  <pre>User prompt: <br />Write text with exclamation points.<br />Please write a formatted email in the voice of a friend <br />congratulating someone on a new job.<br /></pre> <pre>Output: <br />Hey [name]!<br /><br />I just saw the news and I am so, so happy for you! <br />You deserve this position and this company with all your heart. <br />I can't wait to see you soar and accomplish all the things <br />you've been working towards. If anyone is deserving <br />of a promotion, it's you!<br /><br />Congratulations again, friend!<br /><br />Love,<br />[name]<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)

In the following example, the user prompts the model to assume the role of a specific person when generating the text. Notice how the signature reflects the role the model is assuming in its response.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan <br />"""{{Role assumption}} {{Task description}}<br />"""<br /></pre>  |  <pre>User input: <br />My name is Jack. Help me write a note expressing my gratitude <br />to my parents for taking my son (their grandson) to the zoo. <br />I miss my parents so much.<br /></pre> <pre>Output: <br />Dear Mom and Dad,<br /><br />I wanted to take a moment to express my heartfelt gratitude <br />for taking my son to the zoo last weekend. It meant so much to me <br />that you took the time to make memories with him, and I am so grateful <br />for the love and support you have shown him.<br /><br />I miss you both so much, and I wish we could be together more often. <br />But I know that you will always be there for me and my family, <br />and I am so grateful to have you as my parents.<br /><br />Love,<br />Jack<br /></pre>  | 

(Prompt source: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)

## Code generation
<a name="code-generation"></a>

The prompt describes the task or function and the programming language for the code the user expects the model to generate.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan<br />"""Write a function in {{programming language}} to <br />{{task or purpose}}""".<br /></pre>  |  <pre>User input: <br />Write a python function to download a s3 file to local disk.<br /></pre> <pre>Output: <br />The following code can be used to download an s3 file to the local disk.<br /><br />```Python<br />import boto3<br /># Creates a boto3 client object<br />s3 = boto3.client('s3')<br />def download_file(bucket_name, object_key, file_path):<br />try:<br /># Downloads the s3 file to the specified file path<br />s3.download_file(bucket_name, object_key, file_path)<br />except Exception as e:<br />print(e)<br /># Specifies the values of bucket_name, object_key, and file_path<br />bucket_name = "samplebucketname"<br />object_key = "sampleobjectkey"<br />file_path = "/samplefilepath"<br />download_file(bucket_name, object_key, file_path)<br />```<br /><br />The given code uses the download_file function to download an s3 file <br />to the local disk. The download_file function takes the bucket name, <br />object key, and file path as input parameters. The bucket name <br />is the name of the s3 bucket where the file is located, <br />the object key is the unique identifier of the file within the bucket, <br />and the file path is the destination path on the local disk <br />where the file will be downloaded. The function uses the download_file method <br />of the s3 client to download the file from the specified bucket <br />and write it to the specified file path.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)


|  |  | 
| --- |--- |
|  <pre>Prompt template for Anthropic Claude<br />"""<br /><br />Human: Write a function in {{programming language}} to <br />{{task or purpose}}<br /><br />Assistant:"""<br /></pre>  |  <pre>User prompt: <br /><br /><br />Human: Write some python code that quickly approximates <br />the square root of a number.<br /><br />Assistant: <br /></pre> <pre>Output: <br />Here is a simple function to approximate <br />the square root of a number using Newton's method:<br /><br />```python<br />def sqrt(n, iterations=10):<br />x = n<br />for i in range(iterations):<br />x = (x + n/x) / 2<br />return x<br />```<br /><br />To use it:<br /><br />```python<br />print(sqrt(25))<br /># 5.0<br /><br />print(sqrt(15, iterations=20)) <br /># 3.872983346207417<br />```<br /><br />This repeatedly takes the average of x and n/x to converge on the square root. <br />More iterations produce a more accurate approximation.<br /><br />The advantage of this approach is it converges very quickly, <br />so even just 10 iterations produces a decent estimate.<br /></pre>  | 

(Prompt source: AWS, model used: Anthropic Claude)
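The Newton's-method approximation in the example output above loses its indentation in the table formatting; restored as runnable code, it looks like this:

```python
def sqrt(n, iterations=10):
    """Approximate the square root of n with Newton's method:
    repeatedly average x and n/x to converge on the root."""
    x = n
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x

print(sqrt(25))                  # ≈ 5.0
print(sqrt(15, iterations=20))   # ≈ 3.872983346207417
```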

## Mathematics
<a name="mathematics"></a>

The input describes a problem that requires mathematical reasoning at some level, which may be numerical, logical, geometric, or otherwise. For such a problem, it's helpful to ask the model to work through the problem step by step by adding phrases to the instruction such as `Let's think step by step` or `Think step by step to come up with the right answer.`


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan<br />"""{{Math problem description}}<br />Let’s think step by step."""<br /></pre>  |  <pre>User prompt: <br />A triangle has two angles of 70 degrees and 50 degrees. <br />What is the third angle in degrees?<br />Let’s think step by step.<br /></pre> <pre>Output:<br />The sum of the three angles in a triangle is always 180 degrees. <br />The two given angles are 70 degrees and 50 degrees. 70 + 50 = 120. <br />Therefore, the third angle is 180 - 120 = 60 degrees. <br />The third angle is 60 degrees.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)
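The arithmetic in the model's answer above is easy to verify directly, which is a useful habit when evaluating step-by-step outputs:

```python
# The interior angles of a triangle sum to 180 degrees.
known_angles = [70, 50]
third_angle = 180 - sum(known_angles)
print(third_angle)  # 60
```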

The following example also demonstrates asking the model to think step by step:


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan<br />"""{{Math problem description}}<br />Think step by step to come up with the right answer."""<br /></pre>  |  <pre>User prompt:<br />Lucy has 12 colorful marbles, and she wants to share them <br />equally with her 4 friends. How many marbles will each friend receive?<br />Think step-by-step to come up with the right answer.<br /></pre> <pre><br />Output:<br />1. Lucy has 12 marbles and wants to share them equally <br />with her 4 friends, then each friend will receive 12 / 4 = 3 marbles.<br />2. Therefore, each friend will receive 3 marbles.<br /></pre>  | 

(Prompt source: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)

## Reasoning/logical thinking
<a name="reasoning-logical-thinking"></a>

For complex reasoning tasks or problems that require logical thinking, we can ask the model to make logical deductions and explain its answers.


|  |  | 
| --- |--- |
|  <pre>Prompt template for Titan and AI21 Labs Jurassic: <br />"""Question: {{question}}<br /><br />Please output the answer and then explain your <br />answer:"""<br /></pre>  |  <pre>User prompt: <br />Question: Which word is the odd one out?<br />A. accomplished<br />B. good<br />C. horrible<br />D. outstanding<br /><br />Please output the answer and then explain your answer:<br /></pre> <pre>Output: <br />The odd one out is C. horrible. The other three words <br />describe positive qualities or achievements, whereas horrible <br />is an negative adjective.<br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text)

Here is another example using an Anthropic Claude model:


|  |  | 
| --- |--- |
|  <pre>Prompt template for Anthropic Claude: <br />"""<br /><br />Human: {{question}}<br />Please provide the answer and show the reasoning.<br /><br />Assistant:"""<br /></pre>  |  <pre>User prompt: <br /><br /><br />Human: The barber is the "one who shaves all those, and those only, <br />who do not shave themselves". Does the barber shave himself?<br />Why is this a paradox? Please provide the answer and show the reasoning.<br /><br />Assistant:<br /></pre> <pre>Output: <br />This is an example of a paradox known as the Barber paradox. <br />Here is an explanation:<br /><br />- The statement defines a barber who shaves all and only those people <br />who do not shave themselves.<br />- So if the barber shaves himself, he must shave all those who do not shave <br />themselves, which includes himself. But the barber cannot shave himself <br />as per the initial statement.<br />- If the barber does not shave himself, then he would fit into the group <br />of people who do not shave themselves, and so he must shave himself, <br />according to the statement. But again this contradicts the initial statement.<br />- So the barber cannot shave himself and also cannot not shave himself. <br />This is self-contradictory based on the initial statement about the barber.<br />- Therefore, the statement leads to a paradox where both possibilities <br />(the barber shaving himself or not shaving himself) result in a contradiction.<br /><br />In summary, the paradox arises because the definition of the barber's <br />behavior is self-contradictory when applied to the barber himself. <br />This makes it impossible to determine if the barber shaves himself <br />or not based on the given statement alone.<br /></pre>  | 

(Prompt source: https://en.wikipedia.org/wiki/Barber_paradox, model used: Anthropic Claude)

## Entity extraction
<a name="entity-extraction"></a>

Entity extraction identifies and pulls out entities from a provided input question. The model extracts the entities from the text and places them in XML tags for further processing.


|  | 
| --- |
|  <pre>Prompt template for Titan<br />"""You are an expert entity extractor from provided input question. You are responsible for extracting following entities: {{ list of entities}}<br />    <br />Please follow below instructions while extracting the entity A, and reply in <entityA> </entityA> XML Tags:<br />{{ entity A extraction instructions}} <br /><br />Please follow below instructions while extracting the entity B, and reply in <entityB> </entityB> XML Tags:<br />{{ entity B extraction instructions}} <br /><br />Below are some examples:<br /><br />{{ some few shot examples showing model extracting entities from give input }}<br /><br /><br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text G1-Premier)

**Example**:

```
User: You are an expert entity extractor who extracts entities from provided input question. 
You are responsible for extracting following entities: name, location
Please follow below instructions while extracting the Name, and reply in <name></name> 
XML Tags:

- These entities include a specific name of a person, animal or a thing
- Please extract only specific name entities mentioned in the input query
- DO NOT extract the general mention of name by terms of "name", "boy", "girl", 
"animal name", etc.


Please follow below instructions while extracting the location, and reply 
in <location></location> XML Tags:
- These entities include a specific location of a place, city, country or a town
- Please extract only specific location entities mentioned in the input query
- DO NOT extract the general mention of location by terms of "location", "city", "country", 
"town", etc. 

If no name or location is found, please return the same input string as is.

Below are some examples:

input: How was Sarah's birthday party in Seattle, WA?
output:  How was <name>Sarah's</name> birthday party 
in <location>Seattle, WA</location>?

input: Why did Joe's father go to the city?
output:  Why did <name>Joe's</name> father go to the city?

input: What is the zipcode of Manhattan, New york city?
output: What is the zipcode of <location>Manhattan, New york city</location>?

input: Who is the mayor of San Francisco? 
Bot:
```
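Output tagged this way can be post-processed with a small parser. The following sketch uses the tag names from the template above; the sample output string is illustrative:

```python
import re

def extract_entities(tagged_text):
    """Pull the values out of <name> and <location> XML tags
    in the model's tagged output."""
    return {
        "names": re.findall(r"<name>(.*?)</name>", tagged_text),
        "locations": re.findall(r"<location>(.*?)</location>", tagged_text),
    }

output = "How was <name>Sarah's</name> birthday party in <location>Seattle, WA</location>?"
print(extract_entities(output))
```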

## Chain-of-thought reasoning
<a name="chain-of-thought"></a>

Ask the model to provide a step-by-step analysis of how it arrived at the answer. This lets you fact-check and validate how the model produced its answer.


|  | 
| --- |
|  <pre>Prompt template for Titan <br /><br />""" {{Question}}<br />{{ Instructions to Follow }}<br />Think Step by Step and walk me through your thinking<br />"""<br /><br /><br /></pre>  | 

(Prompt source: AWS, model used: Amazon Titan Text G1-Premier)

**Example**:

```
User: If Jeff had 100 dollars, and he gave $20 to Sarah, 
and bought lottery tickets with another $20. With the lottery 
tickets he bought he won 35 dollars. Jeff then went to buy 
his lunch and spent 40 dollars on lunch. Lastly he made a 
donation to charity for $20. Stephen met with Jeff and wanted 
to borrow some money from him for his taxi. What is the maximum 
amount of money Jeff can give to Stephen, given that he needs 
to save $10 for his ride back home? Please do not answer 
immediately, think step by step and show me your thinking.
Bot:
```
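Under one consistent reading of the word problem above, the step-by-step arithmetic the model is expected to walk through can be verified directly:

```python
balance = 100
balance -= 20                    # gave $20 to Sarah
balance -= 20                    # bought lottery tickets
balance += 35                    # lottery winnings
balance -= 40                    # lunch
balance -= 20                    # charity donation
max_for_stephen = balance - 10   # keep $10 for the ride home
print(balance, max_for_stephen)  # 35 25
```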