

# Construct and store reusable prompts with Prompt management in Amazon Bedrock

Amazon Bedrock provides the ability to create and save your own prompts using Prompt management so that you can save time by applying the same prompt to different workflows. When you create a prompt, you can select a model to run inference on it and modify the inference parameters to use. You can include variables in the prompt so that you can adjust the prompt for different use cases.

When you test your prompt, you have the option of comparing different variants of the prompt and choosing the variant that yields outputs that are best-suited for your use case. While iterating on your prompt, you can save versions of it. You integrate a prompt into your application with the help of [Amazon Bedrock Flows](flows.md).

The following is the general workflow for using Prompt management:

1. Create a prompt in Prompt management that you want to reuse across different use cases. Include variables to provide flexibility in the model prompt.

1. Choose a model, inference profile, or agent to run inference on the prompt and modify the inference configurations as necessary.

1. Fill in test values for the variables and run the prompt. You can create variants of your prompt and compare the outputs of different variants to choose the best one for your use case.

1. Integrate the prompt into your application in one of the following ways:
   + Specify the prompt when [running model inference](inference.md).
   + Add a prompt node to a [flow](flows.md) and specify the prompt.

**Topics**
+ [Key definitions](#prompt-management-definitions)
+ [Supported Regions and models for Prompt management](prompt-management-supported.md)
+ [Prerequisites for prompt management](prompt-management-prereq.md)
+ [Create a prompt using Prompt management](prompt-management-create.md)
+ [View information about prompts using Prompt management](prompt-management-view.md)
+ [Modify a prompt using Prompt management](prompt-management-modify.md)
+ [Test a prompt using Prompt management](prompt-management-test.md)
+ [Optimize a prompt](prompt-management-optimize.md)
+ [Deploy a prompt to your application using versions in Prompt management](prompt-management-deploy.md)
+ [Delete a prompt in Prompt management](prompt-management-delete.md)
+ [Run Prompt management code samples](prompt-management-code-ex.md)

## Key definitions


The following list introduces you to the basic concepts of Prompt management:
+ **Prompt** – An input provided to a model to guide it to generate an appropriate response or output.
+ **Variable** – A placeholder that you can include in the prompt. You can include values for each variable when testing the prompt or when you invoke the model at runtime.
+ **Prompt variant** – An alternative configuration of the prompt, including its message or the model or inference configurations used. You can create different variants of a prompt, test them, and save the variant that you want to keep.
+ **Prompt builder** – A tool in the Amazon Bedrock console that lets you create, edit, and test prompts and their variants in a visual interface.
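
The variable substitution that these definitions describe can be pictured with a small sketch. This is purely illustrative: `render_prompt` is a hypothetical helper, and at runtime Amazon Bedrock resolves the `{{variable}}` placeholders for you.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with the supplied test value."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value supplied for variable '{name}'")
        return variables[name]
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

template = "Summarize the following {{document_type}} in {{word_count}} words."
print(render_prompt(template, {"document_type": "article", "word_count": "50"}))
```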

# Supported Regions and models for Prompt management

Prompt management is supported in the following AWS Regions:
+ ap-northeast-1
+ ap-northeast-2
+ ap-northeast-3
+ ap-south-1
+ ap-south-2
+ ap-southeast-1
+ ap-southeast-2
+ ca-central-1
+ eu-central-1
+ eu-central-2
+ eu-north-1
+ eu-south-1
+ eu-south-2
+ eu-west-1
+ eu-west-2
+ eu-west-3
+ sa-east-1
+ us-east-1
+ us-east-2
+ us-gov-east-1
+ us-gov-west-1
+ us-west-2

You can use Prompt management with any text model supported for the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API. For a list of supported models, see [Supported models and model features](conversation-inference-supported-models-features.md).

**Note**  
[InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) only work on prompts from Prompt management whose configuration specifies an Anthropic Claude or Meta Llama model.

# Prerequisites for prompt management

For a role to use Prompt management, you must allow it to perform a certain set of API actions. Review the following prerequisites and fulfill the ones that apply to your use case:

1. If your role has the [AmazonBedrockFullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonBedrockFullAccess) AWS managed policy attached, you can skip this section. Otherwise, follow the steps at [Update the permissions policy for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html#id_roles_update-role-permissions-policy) and attach the following policy to a role to provide permissions to perform actions related to Prompt management:

------
#### [ JSON ]


   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "PromptManagementPermissions",
               "Effect": "Allow",
               "Action": [
                   "bedrock:CreatePrompt",
                   "bedrock:UpdatePrompt",
                   "bedrock:GetPrompt",
                   "bedrock:ListPrompts",
                   "bedrock:DeletePrompt",
                   "bedrock:CreatePromptVersion",
                   "bedrock:OptimizePrompt",
                   "bedrock:GetFoundationModel",
                   "bedrock:ListFoundationModels",
                   "bedrock:GetInferenceProfile",
                   "bedrock:ListInferenceProfiles",
                   "bedrock:InvokeModel",
                   "bedrock:InvokeModelWithResponseStream",
                   "bedrock:RenderPrompt",
                   "bedrock:TagResource",
                   "bedrock:UntagResource",
                   "bedrock:ListTagsForResource"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

------

   To further restrict permissions, you can omit actions, or you can specify resources and condition keys by which to filter permissions. For more information about actions, resources, and condition keys, see the following topics in the *Service Authorization Reference*:
   + [Actions defined by Amazon Bedrock](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-actions-as-permissions) – Learn about actions, the resource types that you can scope them to in the `Resource` field, and the condition keys that you can filter permissions on in the `Condition` field.
   + [Resource types defined by Amazon Bedrock](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-resources-for-iam-policies) – Learn about the resource types in Amazon Bedrock.
   + [Condition keys for Amazon Bedrock](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-policy-keys) – Learn about the condition keys in Amazon Bedrock.
**Note**  
If you plan to deploy your prompt using the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API, see [Prerequisites for running model inference](inference-prereq.md) to learn about the permissions that you must set up to invoke a prompt.
If you plan to use a [flow](flows.md) in Amazon Bedrock Flows to deploy your prompt, see [Prerequisites for Amazon Bedrock Flows](flows-prereq.md) to learn about the permissions that you must set up to create a flow.

1. If you plan to encrypt your prompt with a customer managed key rather than using an AWS managed key (for more information, see [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html)), create the following policies:

   1. Follow the steps at [Creating a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-overview.html) and attach the following key policy to a KMS key to allow Amazon Bedrock to encrypt and decrypt a prompt with the key, replacing the *values* as necessary. The policy contains optional condition keys (see [Condition keys for Amazon Bedrock](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-policy-keys) and [AWS global condition context keys](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-policy-keys)) in the `Condition` field that we recommend you use as a security best practice.

      ```
      {
          "Sid": "EncryptFlowKMS",
          "Effect": "Allow",
          "Principal": {
              "Service": "bedrock.amazonaws.com"
          },
          "Action": [
              "kms:GenerateDataKey",
              "kms:Decrypt"
          ],
          "Resource": "*",
          "Condition": {
              "StringEquals": {
                  "kms:EncryptionContext:aws:bedrock-prompts:arn": "arn:${partition}:bedrock:${region}:${account-id}:prompt/${prompt-id}"
              }
          }
      }
      ```

   1. Follow the steps at [Update the permissions policy for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html#id_roles_update-role-permissions-policy) and attach the following policy to the prompt management role, replacing the *values* as necessary, to allow it to use the customer managed key to generate data keys and decrypt a prompt. The policy contains optional condition keys (see [Condition keys for Amazon Bedrock](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-policy-keys) and [AWS global condition context keys](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-policy-keys)) in the `Condition` field that we recommend you use as a security best practice.

      ```
      {
          "Sid": "KMSPermissions",
          "Effect": "Allow",
          "Action": [
              "kms:GenerateDataKey",
              "kms:Decrypt"
          ],
          "Resource": [
              "arn:aws:kms:${region}:${account-id}:key/${key-id}"
          ],
          "Condition": {
              "StringEquals": {
                  "aws:ResourceAccount": "${account-id}"
              }
          }
      }
      ```

# Create a prompt using Prompt management

When you create a prompt, you have the following options:
+ Write the prompt message that serves as input for an FM to generate an output.
+ Use double curly braces to include variables (as in `{{variable}}`) in the prompt message that can be filled in when you call the prompt.
+ Choose a model with which to invoke the prompt or, if you plan to use the prompt with an agent, leave it unspecified. If you choose a model, you can also modify the inference configurations to use. To see inference parameters for different models, see [Inference request parameters and response fields for foundation models](model-parameters.md).

All prompts support the following base inference parameters:
+ **maxTokens** – The maximum number of tokens to allow in the generated response. 
+ **stopSequences** – A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response. 
+ **temperature** – The likelihood of the model selecting higher-probability options while generating a response. 
+ **topP** – The percentage of most-likely candidates that the model considers for the next token.

If a model supports additional inference parameters, you can specify them as *additional fields* for your prompt. You supply the additional fields in a JSON object. The following example shows how to set `top_k`, which is available in Anthropic Claude models, but isn't a base inference parameter. 

```
{
    "top_k": 200
}
```

For information about model inference parameters, see [Inference request parameters and response fields for foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html).

Setting a base inference parameter as an additional field doesn't override the value that you set in the console.
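
To make the separation concrete, the following sketch builds a Converse-style request body in which the base parameters and the model-specific `top_k` field travel in different places. The model ID is a placeholder for illustration.

```python
# Base inference parameters go in inferenceConfig; model-specific
# parameters such as Anthropic's top_k go in additionalModelRequestFields.
base_config = {"maxTokens": 512, "temperature": 0.5, "topP": 0.9}
additional_fields = {"top_k": 200}

request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
    "messages": [{"role": "user", "content": [{"text": "Hello"}]}],
    "inferenceConfig": base_config,
    "additionalModelRequestFields": additional_fields,
}
print(request["additionalModelRequestFields"])
```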

If the model that you choose for the prompt supports the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API (for more information, see [Carry out a conversation with the Converse API operations](conversation-inference.md)), you can include the following when constructing the prompt:
+ A system prompt to provide instructions or context to the model.
+ Previous prompts (user messages) and model responses (assistant messages) as conversational history for the model to consider when generating a response for the final user message.
+ (If supported by the model) [Tools](tool-use.md) for the model to use when generating the response.
+ (If supported by the model) [Prompt caching](prompt-caching.md) to reduce costs by caching large or frequently used prompts. Depending on the model, you can cache system instructions, tools, and messages (user and assistant). Prompt caching creates a cache checkpoint for the prompt if your total prompt prefix meets the minimum number of tokens that the model requires. When a changed variable is encountered in the prompt, prompt caching creates a new cache checkpoint (if the number of input tokens reaches the minimum that the model requires).

To learn how to create a prompt using Prompt management, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To create a prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose **Create prompt**.

1. Provide a name for the prompt and an optional description.

1. To encrypt your prompt with a customer managed key, select **Customize encryption settings (advanced)** in the **KMS key selection** section. If you omit this field, your prompt will be encrypted with an AWS managed key. For more information, see [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html).

1. Choose **Create prompt**. Your prompt is created and you'll be taken to the **Prompt builder** for your newly created prompt, where you can configure your prompt.

1. You can continue to the following procedure to configure your prompt or return to the prompt builder later.

**To configure your prompt**

1. If you're not already in the prompt builder, do the following:

   1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

   1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

   1. In the **Prompt draft** section, choose **Edit in prompt builder**.

1. Use the **Prompt** pane to construct the prompt. Enter the prompt in the last **User message** box. If the model supports the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API or the [Anthropic Claude Messages API](model-parameters-anthropic-claude-messages.md), you can also include a **System prompt** and previous **User messages** and **Assistant messages** for context.

   When you write a prompt, you can include variables in double curly braces (as in `{{variable}}`). Each variable that you include appears in the **Test variables** section.

1. (Optional) You can modify your prompt in the following ways:
   + In the **Configurations** pane, do the following:

     1. Choose a **Generative AI resource** for running inference.
**Note**  
If you choose an agent, you can only test the prompt in the console. To learn how to test a prompt with an agent in the API, see [Test a prompt using Prompt management](prompt-management-test.md).

     1. In **Inference parameters**, set the inference parameters that you want to use.

     1. If the model supports [reasoning](inference-reasoning.md), turn on **Reasoning** to include the model's reasoning in its response. In **Reasoning tokens**, you can configure the number of reasoning tokens that the model can use. 

     1. In **Additional model request fields**, choose **Configure** to specify additional inference parameters, beyond those in **Inference parameters**. 

     1. If the model that you choose supports tools, choose **Configure tools** to use tools with the prompt.

     1. If the model that you choose supports [prompt caching](prompt-caching.md), choose one of the following options (availability varies by model):
        + **None** – No prompt caching is done.
        + **Tools** – Only tools in the prompt are cached.
        + **Tools, system instructions** – Tools and system instructions in the prompt are cached.
        + **Tools, system instructions, and messages** – Tools, system instructions, and messages (user and assistant) in the prompt are cached.
   + To compare different variants of your prompt, choose **Compare variants**. You can do the following on the comparison page:
     + To add a variant, choose the plus sign. You can add up to three variants.
     + After you specify the details of a variant, you can specify any **Test variables** and choose **Run** to test the output of the variant.
     + To delete a variant, choose the three dots and select **Remove from compare**.
     + To replace the working draft and leave the comparison mode, choose **Save as draft**. All the other variants will be deleted.
     + To leave the comparison mode, choose **Exit compare mode**.

1. You have the following options when you're finished configuring the prompt:
   + To save your prompt, choose **Save draft**. For more information about the draft version, see [Deploy a prompt to your application using versions in Prompt management](prompt-management-deploy.md).
   + To delete your prompt, choose **Delete**. For more information, see [Delete a prompt in Prompt management](prompt-management-delete.md).
   + To create a version of your prompt, choose **Create version**. For more information about prompt versioning, see [Deploy a prompt to your application using versions in Prompt management](prompt-management-deploy.md).

------
#### [ API ]

To create a prompt, send a [CreatePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreatePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt).

The following fields are required:



| Field | Brief description | 
| --- | --- | 
| name | A name for the prompt. | 
| variants | A list of different configurations for the prompt (see below). | 
| defaultVariant | The name of the default variant. | 

Each variant in the `variants` list is a [PromptVariant](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PromptVariant.html) object of the following general structure:

```
{
    "name": "string",
    # modelId or genAiResource (see below)
    "templateType": "TEXT",
    "templateConfiguration": # see below,
    "inferenceConfiguration": {
        "text": {
            "maxTokens": int,
            "stopSequences": ["string", ...],
            "temperature": float,
            "topP": float
        }
    },
    "additionalModelRequestFields": {
        "key": "value",
        ...
    },
    "metadata": [
        {
            "key": "string",
            "value": "string"
        },
        ...
    ]
}
```

Fill in the fields as follows:
+ name – Enter a name for the variant.
+ Include one of these fields, depending on the model invocation resource to use:
  + modelId – To specify a [foundation model](models-supported.md) or [inference profile](cross-region-inference.md) to use with the prompt, enter its ARN or ID.
  + genAiResource – To specify an [agent](agents.md), enter its ID or ARN. The value of the `genAiResource` is a JSON object of the following format:

    ```
    {
        "genAiResource": {
            "agent": {
                "agentIdentifier": "string"
            }
        }
    }
    ```
**Note**  
If you include the `genAiResource` field, you can only test the prompt in the console. To test a prompt with an agent in the API, you must enter the text of the prompt directly into the `inputText` field of the [InvokeAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html) request.
+ templateType – Enter `TEXT` or `CHAT`. `CHAT` is only compatible with models that support the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API. If you want to use prompt caching, you must use the `CHAT` template type.
+ templateConfiguration – The value depends on the template type that you specified:
  + If you specified `TEXT` as the template type, the value should be a [TextPromptTemplateConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_TextPromptTemplateConfiguration.html) JSON object.
  + If you specified `CHAT` as the template type, the value should be a [ChatPromptTemplateConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ChatPromptTemplateConfiguration.html) JSON object.
+ inferenceConfiguration – The `text` field maps to a [PromptModelInferenceConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PromptModelInferenceConfiguration.html). This field contains inference parameters that are common to all models. To learn more about inference parameters, see [Influence response generation with inference parameters](inference-parameters.md).
+ additionalModelRequestFields – Use this field to specify inference parameters that are specific to the model that you're running inference with. To learn more about model-specific inference parameters, see [Inference request parameters and response fields for foundation models](model-parameters.md).
+ metadata – Metadata to associate with the prompt variant. You can append key-value pairs to the array to tag the prompt variant with metadata.
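
As a worked example, here is what a minimal `TEXT` variant might look like when assembled as a request payload. This is a sketch following the structure described above; the model ID, prompt text, and variable names are placeholders.

```python
variant = {
    "name": "summarize-v1",
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
    "templateType": "TEXT",
    "templateConfiguration": {
        "text": {
            "text": "Summarize the following {{document_type}}: {{document}}",
            "inputVariables": [{"name": "document_type"}, {"name": "document"}],
        }
    },
    "inferenceConfiguration": {
        "text": {"maxTokens": 512, "temperature": 0.7, "topP": 0.9}
    },
}

# Every {{variable}} in the prompt text should be declared in inputVariables.
declared = [v["name"] for v in variant["templateConfiguration"]["text"]["inputVariables"]]
print(declared)
```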

The following fields are optional:



| Field | Use case | 
| --- | --- | 
| description | To provide a description for the prompt. | 
| clientToken | To ensure the API request completes only once. For more information, see [Ensuring idempotency](https://docs.aws.amazon.com/ec2/latest/devguide/ec2-api-idempotency.html). | 
| tags | To associate tags with the prompt. For more information, see [Tagging Amazon Bedrock resources](tagging.md). | 

The response creates a `DRAFT` version and returns an ID and ARN that you can use as a prompt identifier for other prompt-related API requests.

------

# View information about prompts using Prompt management

To learn how to view information about prompts using Prompt management, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view information about a prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. The **Prompt details** page includes the following sections:
   + **Overview** – Contains general information about the prompt and when it was created and last updated.
   + **Prompt draft** – Contains the prompt message and configurations for the latest saved draft version of the prompt.
   + **Prompt versions** – A list of all versions of the prompt that have been created. For more information about prompt versions, see [Deploy a prompt to your application using versions in Prompt management](prompt-management-deploy.md).

------
#### [ API ]

To get information about a prompt, send a [GetPrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetPrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the prompt as the `promptIdentifier`. To get information about a specific version of the prompt, specify `DRAFT` or the version number in the `promptVersion` field.

To list information about your prompts, send a [ListPrompts](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListPrompts.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). You can specify the following optional parameters:



| Field | Short description | 
| --- | --- | 
| maxResults | The maximum number of results to return in a response. | 
| nextToken | If there are more results than the number you specified in the maxResults field, the response returns a nextToken value. To see the next batch of results, send the nextToken value in another request. | 
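
The nextToken loop that the table describes can be sketched as follows. The `list_page` callable stands in for the ListPrompts call (for example, `list_prompts` on a boto3 `bedrock-agent` client); here it is simulated so the pagination logic can run on its own.

```python
def list_all_prompts(list_page):
    """Follow nextToken pagination until all prompt summaries are collected."""
    summaries, token = [], None
    while True:
        response = list_page(nextToken=token) if token else list_page()
        summaries.extend(response["promptSummaries"])
        token = response.get("nextToken")
        if not token:
            return summaries

# Simulate two pages of results keyed by the token that requests them.
pages = {
    None: {"promptSummaries": [{"name": "prompt-a"}], "nextToken": "t1"},
    "t1": {"promptSummaries": [{"name": "prompt-b"}]},
}
print(list_all_prompts(lambda nextToken=None: pages[nextToken]))
```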

------

# Modify a prompt using Prompt management

To learn how to modify prompts using Prompt management, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To modify a prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. To edit the **Name** or **Description** of the prompt, choose **Edit** in the **Overview** section. After you make your edits, choose **Save**.

1. To modify the prompt and its configurations, choose **Edit in prompt builder**.

1. To learn about the parts of the prompt that you can modify, see [Create a prompt using Prompt management](prompt-management-create.md).

------
#### [ API ]

To modify a prompt, send an [UpdatePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdatePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). Include both fields that you want to maintain and fields that you want to change.
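
Because the request replaces the stored configuration, a safe pattern is read-modify-write: fetch the current definition, change only the fields you need, and send everything back. The sketch below shows just the merge step with plain dictionaries; the field values are placeholders.

```python
def build_update_request(current: dict, changes: dict) -> dict:
    """Carry forward unchanged fields and overwrite only the changed ones."""
    request = dict(current)   # fields you want to maintain
    request.update(changes)   # fields you want to change
    return request

current = {"name": "my-prompt", "description": "old", "defaultVariant": "v1"}
updated = build_update_request(current, {"description": "new"})
print(updated)
```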

------

# Test a prompt using Prompt management

To learn how to test a prompt you created in Prompt management, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To test a prompt in Prompt management**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. Choose **Edit in Prompt builder** in the **Prompt draft** section, or choose a version of the prompt in the **Versions** section.

1. (Optional) To provide values for variables in your prompt, you need to first select a model in the **Configurations** pane. Then, enter a **Test value** for each variable in the **Test variables** pane.
**Note**  
These test values are temporary and aren't saved if you save your prompt.

1. To test your prompt, choose **Run** in the **Test window** pane.

1. Modify your prompt or its configurations and then run your prompt again as necessary. If you're satisfied with your prompt, you can choose **Create version** to create a snapshot of your prompt that can be used in production. For more information, see [Deploy a prompt to your application using versions in Prompt management](prompt-management-deploy.md).

You can also test the prompt in the following ways:
+ To test the prompt in a flow, include a prompt node in the flow. For more information, see [Create and design a flow in Amazon Bedrock](flows-create.md) and [Node types for your flow](flows-nodes.md).
+ If you didn't configure your prompt with an agent, you can still test the prompt with an agent by importing it when testing an agent. For more information, see [Test and troubleshoot agent behavior](agents-test.md).

------
#### [ API ]

You can test your prompt in the following ways:
+ To run inference on the prompt, send an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html), [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html), [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html), or [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html) request with an [Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-rt) and specify the ARN of the prompt in the `modelId` parameter.
**Note**  
The following restrictions apply when you use a Prompt management prompt with `Converse` or `ConverseStream`:  
You can't include the `additionalModelRequestFields`, `inferenceConfig`, `system`, or `toolConfig` fields.
If you include the `messages` field, the messages are appended after the messages defined in the prompt.
If you include the `guardrailConfig` field, the guardrail is applied to the entire prompt. If you include `guardContent` blocks in the [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html) field, the guardrail will only be applied to those blocks.
+ To test your prompt in a flow, create or edit a flow by sending a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) or [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). Include a prompt node and specify the ARN of the prompt in the `promptArn` field. Then, send an [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) request with an [Agents for Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-rt). For more information, see [Create and design a flow in Amazon Bedrock](flows-create.md) and [Node types for your flow](flows-nodes.md).
+ To test your prompt with an agent, use the Amazon Bedrock console (see the **Console** tab), or enter the text of the prompt into the `inputText` field of an [InvokeAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html) request.
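
As an illustration, the following sketch shapes a `Converse` request for a Prompt management prompt. The prompt ARN, variable names, and helper name are placeholders; per the restrictions above, the request omits `inferenceConfig`, `system`, and `toolConfig`, and supplies variables through the `promptVariables` field:

```python
def build_converse_request(prompt_arn, variables):
    # The prompt ARN goes in modelId instead of a foundation model ID;
    # each variable value is wrapped in a {"text": ...} object.
    return {
        "modelId": prompt_arn,
        "promptVariables": {name: {"text": value} for name, value in variables.items()},
    }

# Placeholder ARN and variables for illustration only
request = build_converse_request(
    "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:1",
    {"genre": "pop", "number": "5"},
)

# To send the request (requires AWS credentials):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```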

------

# Optimize a prompt
Optimize a prompt

Amazon Bedrock offers a tool to optimize prompts. Optimization rewrites prompts to yield inference results that are more suitable for your use case. You can choose the model that you want to optimize the prompt for and then generate a revised prompt. 

After you submit a prompt to optimize, Amazon Bedrock analyzes the components of the prompt. If the analysis is successful, it then rewrites the prompt. You can then copy and use the text of the optimized prompt. 

**Note**  
For best results, we recommend optimizing prompts in English.

**Topics**
+ [

## Supported Regions and models for prompt optimization
](#prompt-management-optimize-supported)
+ [

## Submit a prompt for optimization
](#prompt-management-optimize-submit)

## Supported Regions and models for prompt optimization
Supported Regions and models

The following table shows model support for prompt optimization:


| Provider | Model | Model ID | Single-region model support | 
| --- | --- | --- | --- | 
| Amazon | Nova Lite | amazon.nova-lite-v1:0 | ap-southeast-2, eu-west-2, us-east-1 | 
| Amazon | Nova Micro | amazon.nova-micro-v1:0 | ap-southeast-2, eu-west-2, us-east-1 | 
| Amazon | Nova Premier | amazon.nova-premier-v1:0 |  | 
| Amazon | Nova Pro | amazon.nova-pro-v1:0 | ap-southeast-2, eu-west-2, us-east-1 | 
| Anthropic | Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 | ap-south-1, ap-southeast-2, ca-central-1, eu-central-1, eu-west-1, eu-west-2, eu-west-3, sa-east-1, us-east-1, us-west-2 | 
| Anthropic | Claude 3 Opus | anthropic.claude-3-opus-20240229-v1:0 |  | 
| Anthropic | Claude 3 Sonnet | anthropic.claude-3-sonnet-20240229-v1:0 | ap-south-1, ap-southeast-2, ca-central-1, eu-central-1, eu-west-1, eu-west-2, eu-west-3, sa-east-1, us-east-1, us-west-2 | 
| Anthropic | Claude 3.5 Haiku | anthropic.claude-3-5-haiku-20241022-v1:0 | us-west-2 | 
| Anthropic | Claude 3.5 Sonnet | anthropic.claude-3-5-sonnet-20240620-v1:0 | eu-central-1, us-east-1, us-west-2 | 
| Anthropic | Claude 3.5 Sonnet v2 | anthropic.claude-3-5-sonnet-20241022-v2:0 | ap-southeast-2, us-west-2 | 
| Anthropic | Claude 3.7 Sonnet | anthropic.claude-3-7-sonnet-20250219-v1:0 | eu-west-2 | 
| Anthropic | Claude Opus 4 | anthropic.claude-opus-4-20250514-v1:0 |  | 
| Anthropic | Claude Sonnet 4 | anthropic.claude-sonnet-4-20250514-v1:0 |  | 
| DeepSeek | DeepSeek-R1 | deepseek.r1-v1:0 |  | 
| Meta | Llama 3 70B Instruct | meta.llama3-70b-instruct-v1:0 | ap-south-1, ca-central-1, eu-west-2, us-east-1, us-west-2 | 
| Meta | Llama 3.1 70B Instruct | meta.llama3-1-70b-instruct-v1:0 | us-west-2 | 
| Meta | Llama 3.2 11B Instruct | meta.llama3-2-11b-instruct-v1:0 |  | 
| Meta | Llama 3.3 70B Instruct | meta.llama3-3-70b-instruct-v1:0 |  | 
| Meta | Llama 4 Maverick 17B Instruct | meta.llama4-maverick-17b-instruct-v1:0 |  | 
| Meta | Llama 4 Scout 17B Instruct | meta.llama4-scout-17b-instruct-v1:0 |  | 
| Mistral AI | Mistral Large (24.02) | mistral.mistral-large-2402-v1:0 | ap-south-1, ap-southeast-2, ca-central-1, eu-west-1, eu-west-2, eu-west-3, sa-east-1, us-east-1, us-west-2 | 
| Mistral AI | Mistral Large (24.07) | mistral.mistral-large-2407-v1:0 | us-west-2 | 

## Submit a prompt for optimization


To learn how to optimize a prompt, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

You can optimize a prompt using a playground or Prompt management in the AWS Management Console. You must select a model before you can optimize a prompt, because the prompt is optimized for the model that you choose.

**To optimize a prompt in a playground**

1. To learn how to write a prompt in an Amazon Bedrock playground, follow the steps at [Generate responses in the console using playgrounds](playgrounds.md).

1. After you write a prompt and select a model, choose the wand icon (![\[Sparkle icon representing cleaning or refreshing functionality.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/icons/wand.png)). The **Optimize prompt** dialog box opens, and Amazon Bedrock begins optimizing your prompt.

1. When Amazon Bedrock finishes analyzing and optimizing your prompt, you can compare your original prompt side by side with the optimized prompt in the dialog box.

1. To replace your prompt with the optimized prompt in the playground, choose **Use optimized prompt**. To keep your original prompt, choose **Cancel**.

1. To submit the prompt and generate a response, choose **Run**.

**To optimize a prompt in Prompt management**

1. To learn how to write a prompt using Prompt management, follow the steps at [Create a prompt using Prompt management](prompt-management-create.md).

1. After you write a prompt and select a model, choose **(![\[Sparkle icon representing cleaning or refreshing functionality.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/icons/wand.png)) Optimize** at the top of the **Prompt** box.

1. When Amazon Bedrock finishes analyzing and optimizing your prompt, your optimized prompt is displayed as a variant side by side with the original prompt.

1. To use the optimized prompt instead of your original one, select **Replace original prompt**. To keep your original prompt, choose **Exit comparison** and choose to save the original prompt.
**Note**  
If the comparison view already contains three prompts and you try to optimize another prompt, you're asked to overwrite either the original prompt or one of the variants.

1. To submit the prompt and generate a response, choose **Run**.

------
#### [ API ]

To optimize a prompt, send an [OptimizePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_OptimizePrompt.html) request with an [Agents for Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-rt). Provide the prompt to optimize in the `input` object and specify the model to optimize for in the `targetModelId` field.

The response stream returns the following events:

1. [analyzePromptEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_AnalyzePromptEvent.html) – Appears when the prompt is finished being analyzed. Contains a message describing the analysis of the prompt.

1. [optimizedPromptEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_OptimizedPromptEvent.html) – Appears when the prompt has finished being rewritten. Contains the optimized prompt.

Run the following code sample to optimize a prompt:

```
import boto3

# Set values here
TARGET_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0" # Model to optimize for. For model IDs, see https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
PROMPT = "Please summarize this text: " # Prompt to optimize

def get_input(prompt):
    return {
        "textPrompt": {
            "text": prompt
        }
    }
 
def handle_response_stream(response):
    # The stream returns an analyzePromptEvent, then an optimizedPromptEvent
    event_stream = response['optimizedPrompt']
    for event in event_stream:
        if 'analyzePromptEvent' in event:
            print("========================= ANALYZE PROMPT =======================\n")
            print(event['analyzePromptEvent'])
        elif 'optimizedPromptEvent' in event:
            print("========================== OPTIMIZED PROMPT ======================\n")
            print(event['optimizedPromptEvent'])


if __name__ == '__main__':
    client = boto3.client('bedrock-agent-runtime')
    response = client.optimize_prompt(
        input=get_input(PROMPT),
        targetModelId=TARGET_MODEL_ID
    )
    print("Request ID:", response["ResponseMetadata"]["RequestId"])
    print("========================== INPUT PROMPT ======================\n")
    print(PROMPT)
    handle_response_stream(response)
```

------

# Deploy a prompt to your application using versions in Prompt management
Deploy to your application using versions

When you save your prompt, you create a *draft version* of it. You can keep iterating on the draft version by modifying the prompt and its configurations and saving it.

When you're ready to deploy a prompt to production, you create a version of it to use in your application. A version is a snapshot of the working draft at the point in time that you create it. Create a version when you're satisfied with a set of configurations. Versions let you switch easily between different configurations for your prompt and update your application with the version that best suits your use case.

**Topics**
+ [

# Create a version of a prompt in Prompt management
](prompt-management-version-create.md)
+ [

# View information about versions of a prompt in Prompt management
](prompt-management-version-view.md)
+ [

# Compare versions of a prompt in Prompt management
](prompt-management-version-compare.md)
+ [

# Delete a version of a prompt in Prompt management
](prompt-management-version-delete.md)

# Create a version of a prompt in Prompt management
Create a version

To learn how to create a version of your prompt, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

If you're in the prompt builder, you can create a version of your prompt by choosing **Create version**. Otherwise, do the following:

**To create a version of your prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. In the **Prompt versions** section, choose **Create version** to take a snapshot of your draft version.

------
#### [ API ]

To create a version of your prompt, send a [CreatePromptVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreatePromptVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the prompt as the `promptIdentifier`.

The response returns an ID and ARN for the version. Versions are created incrementally, starting from 1.
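
Because versions are numbered incrementally as numeric strings, sort them numerically (not lexicographically) when you need the latest snapshot. A minimal sketch with placeholder summary data; the helper name and ARNs are illustrative:

```python
def latest_version(version_summaries):
    # Version numbers are numeric strings starting at "1"; compare as
    # integers so that "10" ranks above "9".
    return max(version_summaries, key=lambda s: int(s["version"]))

# Placeholder summaries, shaped like entries in a promptSummaries list
summaries = [
    {"version": "1", "arn": "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:1"},
    {"version": "2", "arn": "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:2"},
    {"version": "10", "arn": "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:10"},
]
print(latest_version(summaries)["version"])  # "10"
```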

------

# View information about versions of a prompt in Prompt management
View information about versions

To learn how to view information about a version of your prompt, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view information about a version of your prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. In the **Prompt versions** section, choose a version.

1. In the **Version details** page, you can see information about the version, the prompt message, and its configurations. For more information about testing a version of the prompt, see [Test a prompt using Prompt management](prompt-management-test.md).

------
#### [ API ]

To get information about a version of your prompt, send a [GetPrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetPrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the prompt as the `promptIdentifier`. In the `promptVersion` field, specify the version number.

------

# Compare versions of a prompt in Prompt management
Compare versions

The Amazon Bedrock console offers a tool to let you compare versions of a prompt that you've created in Prompt management. The tool highlights fields that exist in one version that don't exist in the other.

**To compare prompt versions**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. In the **Versions** section, select the checkboxes next to two prompts to compare.

1. Choose **Compare**.

1. The JSON objects defining each prompt version are shown side by side. Differences between the versions are shown as follows:
   + Fields that exist in one version, but don't exist in the other, are marked by a plus (+) symbol and highlighted in green.
   + Fields that don't exist in one version, but exist in the other, are marked by a minus (-) symbol and highlighted in red.

1. To compare output model responses for the different versions, fill in the **Test variables** and choose **Run prompt**.

# Delete a version of a prompt in Prompt management
Delete a version

To learn how to delete a version of your prompt, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To delete a version of your prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane. Then, choose a prompt in the **Prompts** section.

1. In the **Prompt versions** section, select a version and choose **Delete**.

1. Review the warning that appears, type **confirm**, and then choose **Delete**.

------
#### [ API ]

To delete a version of your prompt, send a [DeletePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeletePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the prompt as the `promptIdentifier`. In the `promptVersion` field, specify the version number to delete.

------

# Delete a prompt in Prompt management
Delete a prompt

If you no longer need a prompt, you can delete it. Deleted prompts are retained on AWS servers for up to 14 days. To learn how to delete a prompt using Prompt management, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

If you're in the **Prompt details** page for a prompt or in the prompt builder, choose **Delete** to delete a prompt.

**Note**  
If you delete a prompt, all its versions will also be deleted. Any resources using your prompt might experience runtime errors. Remember to disassociate the prompt from any resources using it.

**To delete a prompt**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Prompt management** from the left navigation pane.

1. Select a prompt and choose **Delete**.

1. Review the warning that appears, type **confirm**, and then choose **Delete**.

------
#### [ API ]

To delete a prompt, send a [DeletePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeletePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the prompt as the `promptIdentifier`. To delete a specific version of the prompt, specify the version number in the `promptVersion` field.
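
For example, the same request shape covers both cases; omitting `promptVersion` deletes the prompt and all of its versions. A minimal sketch (the helper name and prompt ID are illustrative):

```python
def build_delete_request(prompt_identifier, prompt_version=None):
    # Omit promptVersion to delete the prompt and every version of it;
    # include it to delete only that version.
    params = {"promptIdentifier": prompt_identifier}
    if prompt_version is not None:
        params["promptVersion"] = prompt_version
    return params

# Delete only version 2 of a prompt (placeholder ID)
print(build_delete_request("PROMPT12345", "2"))

# To send the request (requires AWS credentials):
# import boto3
# client = boto3.client("bedrock-agent")
# client.delete_prompt(**build_delete_request("PROMPT12345", "2"))
```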

------

# Run Prompt management code samples
Run code samples

To try out some code samples for Prompt management, choose the tab for your preferred method, and then follow the steps. The following code samples assume that you've set up your credentials to use the AWS API. If you haven't, refer to [Get started with the API](getting-started-api.md).

------
#### [ Python ]

1. Run the following code snippet to load the AWS SDK for Python (Boto3), create a client, and create a prompt that generates a music playlist using two variables (`genre` and `number`) by making a [CreatePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreatePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # Create a prompt in Prompt management
   import boto3
   
   # Create an Amazon Bedrock Agents client
   client = boto3.client(service_name="bedrock-agent")
   
   # Create the prompt
   response = client.create_prompt(
       name="MakePlaylist",
       description="My first prompt.",
       variants=[
           { 
               "name": "Variant1",
               "modelId": "amazon.titan-text-express-v1",
               "templateType": "TEXT",
               "inferenceConfiguration": {
                   "text": {
                       "temperature": 0.8
                   }
               },
               "templateConfiguration": { 
                   "text": {
                       "text": "Make me a {{genre}} playlist consisting of the following number of songs: {{number}}."
                   }
               }
         }
       ]
   )
                           
   prompt_id = response.get("id")
   ```

1. Run the following code snippet to see the prompt that you just created (alongside any other prompts in your account) by making a [ListPrompts](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListPrompts.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # List prompts that you've created
   client.list_prompts()
   ```

1. You should see the ID of the prompt you created in the `id` field of the object in the `promptSummaries` field. Run the following code snippet to show information for the prompt that you created by making a [GetPrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetPrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # Get information about the prompt that you created
   client.get_prompt(promptIdentifier=prompt_id)
   ```

1. Create a version of the prompt and get its ID by running the following code snippet to make a [CreatePromptVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreatePromptVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # Create a version of the prompt that you created
   response = client.create_prompt_version(promptIdentifier=prompt_id)
                           
   prompt_version = response.get("version")
   prompt_version_arn = response.get("arn")
   ```

1. View information about the prompt version that you just created, alongside information about the draft version, by running the following code snippet to make a [ListPrompts](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListPrompts.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # List versions of the prompt that you just created
   client.list_prompts(promptIdentifier=prompt_id)
   ```

1. View information for the prompt version that you just created by running the following code snippet to make a [GetPrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetPrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # Get information about the prompt version that you created
   client.get_prompt(
       promptIdentifier=prompt_id, 
       promptVersion=prompt_version
   )
   ```

1. Test the prompt by adding it to a flow, following the steps at [Run Amazon Bedrock Flows code samples](flows-code-ex.md). In the first step, when you create the flow, run the following code snippet instead so that the flow uses the prompt that you created rather than an inline prompt (replace the value of the `promptArn` field with the ARN of the prompt version that you created):

   ```
   # Import Python SDK and create client
   import boto3
   
   client = boto3.client(service_name='bedrock-agent')
   
   FLOWS_SERVICE_ROLE = "arn:aws:iam::123456789012:role/MyPromptFlowsRole" # Flows service role that you created. For more information, see https://docs.aws.amazon.com/bedrock/latest/userguide/flows-permissions.html
   PROMPT_ARN = prompt_version_arn # ARN of the prompt version that you created, retrieved programmatically during creation.
   
   # Define each node
   
   # The input node validates that the content of the InvokeFlow request is a JSON object.
   input_node = {
       "type": "Input",
       "name": "FlowInput",
       "outputs": [
           {
               "name": "document",
               "type": "Object"
           }
       ]
   }
   
   # This prompt node contains a prompt that you defined in Prompt management.
   # It validates that the input is a JSON object that minimally contains the fields "genre" and "number", which it will map to the prompt variables.
   # The output must be named "modelCompletion" and be of the type "String".
   prompt_node = {
       "type": "Prompt",
       "name": "MakePlaylist",
       "configuration": {
           "prompt": {
               "sourceConfiguration": {
                   "resource": {
                       "promptArn": ""
                   }
               }
           }
       },
       "inputs": [
           {
               "name": "genre",
               "type": "String",
               "expression": "$.data.genre"
           },
           {
               "name": "number",
               "type": "Number",
               "expression": "$.data.number"
           }
       ],
       "outputs": [
           {
               "name": "modelCompletion",
               "type": "String"
           }
       ]
   }
   
   # The output node validates that the output from the last node is a string and returns it as is. The name must be "document".
   output_node = {
       "type": "Output",
       "name": "FlowOutput",
       "inputs": [
           {
               "name": "document",
               "type": "String",
               "expression": "$.data"
           }
       ]
   }
   
   # Create connections between the nodes
   connections = []
   
   #   First, create connections between the output of the flow input node and each input of the prompt node
   for input in prompt_node["inputs"]:
       connections.append(
           {
               "name": "_".join([input_node["name"], prompt_node["name"], input["name"]]),
               "source": input_node["name"],
               "target": prompt_node["name"],
               "type": "Data",
               "configuration": {
                   "data": {
                       "sourceOutput": input_node["outputs"][0]["name"],
                       "targetInput": input["name"]
                   }
               }
           }
       )
   
   # Then, create a connection between the output of the prompt node and the input of the flow output node
   connections.append(
       {
           "name": "_".join([prompt_node["name"], output_node["name"]]),
           "source": prompt_node["name"],
           "target": output_node["name"],
           "type": "Data",
           "configuration": {
               "data": {
                   "sourceOutput": prompt_node["outputs"][0]["name"],
                   "targetInput": output_node["inputs"][0]["name"]
               }
           }
       }
   )
   
   # Create the flow from the nodes and connections
   client.create_flow(
       name="FlowCreatePlaylist",
       description="A flow that creates a playlist given a genre and number of songs to include in the playlist.",
       executionRoleArn=FLOWS_SERVICE_ROLE,
       definition={
           "nodes": [input_node, prompt_node, output_node],
           "connections": connections
       }
   )
   ```

1. Delete the prompt version that you just created by running the following code snippet to make a [DeletePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeletePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # Delete the prompt version that you created
   client.delete_prompt(
       promptIdentifier=prompt_id, 
       promptVersion=prompt_version
   )
   ```

1. Fully delete the prompt that you just created by running the following code snippet to make a [DeletePrompt](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeletePrompt.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   # Delete the prompt that you created
   client.delete_prompt(
       promptIdentifier=prompt_id
   )
   ```

------