

# Use cases for Amazon Bedrock Guardrails

After you create a guardrail, you can apply it with the following features:
+ [Model inference](inference.md) – Apply a guardrail to submitted prompts and generated responses when running inference on a model.
+ [Agents](agents.md) – Associate a guardrail with an agent to apply it to prompts sent to the agent and responses returned from it.
+ [Knowledge base](knowledge-base.md) – Apply a guardrail when querying a knowledge base and generating responses from it.
+ [Flow](flows.md) – Add a guardrail to a prompt node or knowledge base node in a flow to apply it to inputs and outputs of these nodes.

The following table describes how to include a guardrail for each of these features using the AWS Management Console or the Amazon Bedrock API.



| Use case | Console | API | 
| --- | --- | --- | 
| Model inference | Select the guardrail when [using a playground](playgrounds.md). | Specify in the header in an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) request or include in the guardrailConfig field in the body of a [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) or [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html) request. | 
| Associate with an agent | When you [create or update](agents-build-modify.md) the agent, specify in the Guardrail details section of the Agent builder. | Include a guardrailConfiguration field in the body of a [CreateAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateAgent.html) or [UpdateAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateAgent.html) request. | 
| Use when querying a knowledge base | Follow the steps in the [Guardrails](kb-test-config.md#kb-test-config-guardrails) section of the query configurations. Add a guardrail when you set Configurations. | Include a guardrailConfiguration field in the body of a [RetrieveAndGenerate](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_RetrieveAndGenerate.html) request. | 
| Include in a prompt node in a flow | When you [create](flows-create.md) or [update](flows-modify.md) a flow, select the prompt node and specify the guardrail in the Configure section. | When you define the prompt node in the nodes field in a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) or [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request, include a guardrailConfiguration field in the [PromptFlowNodeConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PromptFlowNodeConfiguration.html). | 
| Include in a knowledge base node in a flow | When you [create](flows-create.md) or [update](flows-modify.md) a flow, select the knowledge base node and specify the guardrail in the Configure section. | When you define the knowledge base node in the nodes field in a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) or [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request, include a guardrailConfiguration field in the [KnowledgeBaseFlowNodeConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_KnowledgeBaseFlowNodeConfiguration.html). | 

This section covers using a guardrail with model inference and the Amazon Bedrock API. You can use the base inference operations ([InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)) and the Converse API ([Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) and [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html)). With both sets of operations you can use a guardrail with synchronous and streaming model inference. You can also selectively evaluate user input and can configure streaming response behavior. 
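With the base inference operations, the guardrail is identified by request parameters rather than by fields inside the request body. As a minimal sketch (the model ID, guardrail ID, and the `build_invoke_kwargs` helper are placeholders for illustration), the following assembles the arguments for a boto3 `invoke_model` call that applies a guardrail:

```python
import json


def build_invoke_kwargs(model_id: str, prompt: str,
                        guardrail_id: str, guardrail_version: str) -> dict:
    """Assemble keyword arguments for a bedrock-runtime invoke_model call.

    The guardrail is identified by the guardrailIdentifier and
    guardrailVersion request parameters; the prompt goes in the body.
    """
    return {
        "modelId": model_id,
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "trace": "ENABLED",  # include the guardrail trace in the response
        "body": json.dumps({"inputText": prompt}),
    }


kwargs = build_invoke_kwargs(
    "amazon.titan-text-express-v1",
    "Create a playlist of heavy metal songs.",
    "Your guardrail ID",
    "DRAFT",
)
# With AWS credentials configured, you would then call:
# boto3.client("bedrock-runtime").invoke_model(**kwargs)
```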

**Topics**
+ [Use your guardrail with inference operations to evaluate user input](guardrails-input-tagging-base-inference.md)
+ [Use the ApplyGuardrail API in your application](guardrails-use-independent-api.md)

# Use your guardrail with inference operations to evaluate user input

You can use guardrails with the base inference operations, [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) (streaming). This section covers how you selectively evaluate user input and how you can configure streaming response behavior. Note that for conversational applications, you can achieve the same results with the [Converse API](guardrails-use-converse-api.md).

For example code that calls the base inference operations, see [Submit a single prompt with InvokeModel](inference-invoke.md). For information about using a guardrail with the base inference operations, follow the steps in the API tab of [Test your guardrail](guardrails-test.md). 

**Topics**
+ [Apply tags to user input to filter content](guardrails-tagging.md)
+ [Configure streaming response behavior to filter content](guardrails-streaming.md)
+ [Include a guardrail with the Converse API](guardrails-use-converse-api.md)

# Apply tags to user input to filter content

Input tags allow you to mark specific content within the input text that you want to be processed by guardrails. This is useful when you want to apply guardrails to certain parts of the input, while leaving other parts unprocessed.

For example, the input prompt in RAG applications may contain system prompts, search results from trusted documentation sources, and user queries. Because system prompts are provided by the developer and search results come from trusted sources, you may need guardrails to evaluate only the user queries.

In another example, the input prompt in conversational applications may contain system prompts, conversation history, and the current user input. System prompts are developer-specific instructions, and the conversation history contains historical user inputs and model responses that may already have been evaluated by guardrails. In such a scenario, you may want to evaluate only the current user input.

By using input tags, you can better control which parts of the input prompt are processed and evaluated by guardrails, ensuring that your safeguards are customized to your use cases. This also improves performance and reduces costs, because you can evaluate a shorter, relevant section of the input instead of the entire input prompt.

**Tag content for guardrails**

To tag content for guardrails to process, use the XML tag that is a combination of a reserved prefix and a custom `tagSuffix`. For example:

```
{
    "text": """
        You are a helpful assistant.
        Here is some information about my account:
          - There are 10,543 objects in an S3 bucket.
          - There are no active EC2 instances.
        Based on the above, answer the following question:
        Question: 
        <amazon-bedrock-guardrails-guardContent_xyz>
        How many objects do I have in my S3 bucket? 
        </amazon-bedrock-guardrails-guardContent_xyz>
         ...
        Here are other user queries:
        <amazon-bedrock-guardrails-guardContent_xyz>
        How do I download files from my S3 bucket?
        </amazon-bedrock-guardrails-guardContent_xyz>    
    """,
    "amazon-bedrock-guardrailConfig": {
        "tagSuffix": "xyz"
    }
}
```

In the preceding example, the content *How many objects do I have in my S3 bucket?* and *How do I download files from my S3 bucket?* are tagged for guardrails processing using the tag `<amazon-bedrock-guardrails-guardContent_xyz>`. Note that the prefix `amazon-bedrock-guardrails-guardContent` is reserved by guardrails.

**Tag Suffix**

The tag suffix (`xyz` in the preceding example) is a dynamic value that you must provide in the `tagSuffix` field of `amazon-bedrock-guardrailConfig` to use input tagging. The suffix is limited to alphanumeric characters, with a length between 1 and 20 characters, inclusive. We recommend using a new, random string as the `tagSuffix` for every request. This makes the tag structure unpredictable and helps mitigate prompt injection attacks: with a static suffix, a malicious user could close the XML tag and append content after the closure, resulting in an *injection attack*. With the example suffix `xyz`, you enclose all content to be guarded as `<amazon-bedrock-guardrails-guardContent_xyz>`*your content*`</amazon-bedrock-guardrails-guardContent_xyz>`.
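As a minimal sketch of this recommendation (the `wrap_for_guardrails` helper is hypothetical, not part of any SDK), the following Python generates a fresh random suffix for each request and wraps only the content to be guarded:

```python
import secrets

# Reserved prefix defined by Amazon Bedrock Guardrails.
GUARD_PREFIX = "amazon-bedrock-guardrails-guardContent"


def wrap_for_guardrails(user_query: str) -> tuple:
    """Return (tagged_text, suffix) using a fresh random alphanumeric suffix.

    A new suffix per request keeps the tag structure unpredictable,
    which mitigates prompt injection against a known, static tag.
    """
    suffix = secrets.token_hex(8)  # 16 alphanumeric chars, within the 1-20 limit
    tag = f"{GUARD_PREFIX}_{suffix}"
    return f"<{tag}>{user_query}</{tag}>", suffix


tagged, suffix = wrap_for_guardrails("How many objects do I have in my S3 bucket?")
body = {
    "text": f"You are a helpful assistant.\nQuestion:\n{tagged}",
    "amazon-bedrock-guardrailConfig": {"tagSuffix": suffix},
}
```

The same `suffix` value must appear both in the XML tags and in the `tagSuffix` field, so the helper returns it alongside the tagged text.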

**Multiple tags**

You can use the same tag structure multiple times in the input text to mark different parts of the content for guardrails processing. Nesting of tags is not allowed.

**Untagged content**

Content outside of input tags isn't processed by guardrails. This allows you to include instructions, sample conversations, knowledge bases, or other content that you deem safe and don't want to be processed by guardrails. If there are no tags in the input prompt, the complete prompt will be processed by guardrails. The only exception is [Detect prompt attacks with Amazon Bedrock Guardrails](guardrails-prompt-attack.md) filters, which require input tags to be present.

# Configure streaming response behavior to filter content

The [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) API returns data in a streaming format. This allows you to access responses in chunks without waiting for the entire result. When using guardrails with a streaming response, there are two modes of operation: synchronous and asynchronous.

**Synchronous mode**

In the default synchronous mode, guardrails buffer and apply the configured policies to one or more response chunks before the response is sent back to the user. This introduces some latency, because each response chunk is delayed until the guardrail scan completes. In exchange, accuracy is better: every response chunk is scanned by guardrails before being sent to the user.

**Asynchronous mode**

In asynchronous mode, guardrails sends the response chunks to the user as soon as they become available, while asynchronously applying the configured policies in the background. The advantage is that response chunks are provided immediately with no latency impact, but response chunks may contain inappropriate content until guardrails scan completes. As soon as inappropriate content is identified, subsequent chunks will be blocked by guardrails.

**Warning**  
Amazon Bedrock Guardrails doesn't support the masking of sensitive information with asynchronous mode.

**Enabling asynchronous mode**

To enable asynchronous mode, you need to include the `streamProcessingMode` parameter in the `amazon-bedrock-guardrailConfig` object of your `InvokeModelWithResponseStream` request:

```
{
   "amazon-bedrock-guardrailConfig": {
   "streamProcessingMode": "ASYNCHRONOUS"
   }
}
```

By understanding the trade-offs between the synchronous and asynchronous modes, you can choose the appropriate mode based on your application's requirements for latency and content moderation accuracy.

# Include a guardrail with the Converse API

You can use a guardrail to guard conversational apps that you create with the Converse API. For example, if you create a chat app with the Converse API, you can use a guardrail to block inappropriate content that the user enters and inappropriate content that the model generates. For information about the Converse API, see [Carry out a conversation with the Converse API operations](conversation-inference.md). 

**Topics**
+ [Call the Converse API with guardrails](#guardrails-use-converse-api-call)
+ [Processing the response when using the Converse API](#guardrails-use-converse-api-response)
+ [Code example for using Converse API with guardrails](#converse-api-guardrail-example)

## Call the Converse API with guardrails


To use a guardrail, you include configuration information for the guardrail in calls to the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) or [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html) (for streaming responses) operations. Optionally, you can select specific content in the message that you want the guardrail to assess. For information about the models that you can use with guardrails and the Converse API, see [Supported models and model features](conversation-inference-supported-models-features.md). 

**Topics**
+ [Configure a guardrail to work with the Converse API](#guardrails-use-converse-api-call-configure)
+ [Evaluate only specific content in a message](#guardrails-use-converse-api-call-message)
+ [Guarding a system prompt sent to the Converse API](#guardrails-use-converse-api-call-message-system-guard)
+ [Message and system prompt guardrail behavior](#guardrails-use-converse-api-call-message-system-message-guard)

### Configure a guardrail to work with the Converse API


You specify guardrail configuration information in the `guardrailConfig` input parameter. The configuration includes the ID and the version of the guardrail that you want to use. You can also enable tracing for the guardrail, which provides information about the content that the guardrail blocked. 

With the `Converse` operation, `guardrailConfig` is a [GuardrailConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConfiguration.html) object, as shown in the following example.

```
{
        "guardrailIdentifier": "Guardrail ID",
        "guardrailVersion": "Guardrail version",
        "trace": "enabled"
}
```

If you use `ConverseStream`, you pass a [GuardrailStreamConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailStreamConfiguration.html) object. Optionally, use the `streamProcessingMode` field to specify whether the guardrail assessment should complete before streaming response chunks are returned, or whether the model should respond while the guardrail continues its assessment in the background. For more information, see [Configure streaming response behavior to filter content](guardrails-streaming.md).
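For example, a `GuardrailStreamConfiguration` that opts into asynchronous processing might look like the following (the ID and version are placeholders you would replace):

```
{
    "guardrailIdentifier": "Guardrail ID",
    "guardrailVersion": "Guardrail version",
    "trace": "enabled",
    "streamProcessingMode": "async"
}
```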

### Evaluate only specific content in a message


When you pass a [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) to a model, your guardrail assesses the content in the message. You can also assess specific parts of a message by using the `guardContent` ([GuardrailConverseContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConverseContentBlock.html)) field.

**Tip**  
Using the `guardContent` field is similar to using input tags with [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html). For more information, see [Apply tags to user input to filter content](guardrails-tagging.md). 

For example, the guardrail evaluates only the content in the `guardContent` field and not the rest of the message. This is useful for having the guardrail assess only the most recent message in a conversation, as in the following example.

```
[
    {
        "role": "user",
        "content": [
            {
                "text": "Create a playlist of 2 pop songs."
            }
        ]
    },
    {
        "role": "assistant",
        "content": [
            {
                "text": "Sure! Here are two pop songs:\n1. \"Bad Habits\" by Ed Sheeran\n2. \"All Of The Lights\" by Kanye West\n\nWould you like to add any more songs to this playlist?"
            }
        ]
    },
    {
        "role": "user",
        "content": [
            {
                "guardContent": {
                    "text": {
                        "text": "Create a playlist of 2 heavy metal songs."
                    }
                }
            }
        ]
    }
]
```

Another use case of `guardContent` is providing additional context for a message without your guardrail assessing that context. In the following example, the guardrail assesses only `"Create a playlist of heavy metal songs."` and ignores `"Only answer with a list of songs."`.

```
messages = [
    {
        "role": "user",
        "content": [
            {
                "text": "Only answer with a list of songs."
            },
            {
                "guardContent": {
                    "text": {
                        "text": "Create a playlist of heavy metal songs."
                    }
                }
            }
        ]
    }
]
```

If content isn't in a `guardContent` block, that doesn't necessarily mean it won't be evaluated. This behavior depends on which filtering policies the guardrail uses. 

The following example shows two `guardContent` blocks with [contextual grounding checks](guardrails-contextual-grounding-check.md) (based on the `qualifiers` fields). The contextual grounding checks in the guardrail will only evaluate the content in these blocks. However, if the guardrail also has a [word filter](guardrails-content-filters.md) that blocks the word "background", the text "Some additional background information." will still be evaluated, even though it's not in a `guardContent` block.

```
[{
    "role": "user",
    "content": [{
            "guardContent": {
                "text": {
                    "text": "London is the capital of UK. Tokyo is the capital of Japan.",
                    "qualifiers": ["grounding_source"]
                }
            }
        },
        {
            "text": "Some additional background information."
        },
        {
            "guardContent": {
                "text": {
                    "text": "What is the capital of Japan?",
                    "qualifiers": ["query"]
                }
            }
        }
    ]
}]
```

### Guarding a system prompt sent to the Converse API


You can use guardrails with system prompts that you send to the Converse API. To guard a system prompt, specify the `guardContent` ([SystemContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_SystemContentBlock.html)) field in the system prompt that you pass to the API, as shown in the following example.

```
[
    {
        "guardContent": {
            "text": {
                "text": "Only respond with Welsh heavy metal songs."
            }
        }
    }
]
```

If you don't provide the `guardContent` field, the guardrail doesn't assess the system prompt message. 

### Message and system prompt guardrail behavior


How the guardrail assesses the `guardContent` field differs between system prompts and the messages that you pass to the model.


|  | System prompt has guardrail block | System prompt doesn't have guardrail block | 
| --- | --- | --- | 
|  **Messages have guardrail block**  |  System: Guardrail investigates content in guardrail block. Messages: Guardrail investigates content in guardrail block.  | System: Guardrail investigates nothing. Messages: Guardrail investigates content in guardrail block. | 
|  **Messages don't have guardrail block**  |  System: Guardrail investigates content in guardrail block. Messages: Guardrail investigates everything.  |  System: Guardrail investigates nothing. Messages: Guardrail investigates everything.  | 

## Processing the response when using the Converse API


When you call the Converse operation, the guardrail assesses the message that you send. If the guardrail detects blocked content, the following happens.
+ The `stopReason` field in the response is set to `guardrail_intervened`.
+ If you enabled tracing, the trace is available in the `trace` ([ConverseTrace](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseTrace.html)) field. With `ConverseStream`, the trace is in the metadata ([ConverseStreamMetadataEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStreamMetadataEvent.html)) that the operation returns. 
+ The blocked content text that you have configured in the guardrail is returned in the `output` ([ConverseOutput](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseOutput.html)) field. With `ConverseStream` the blocked content text is in the streamed message.

The following partial response shows the blocked content text and the trace from the guardrail assessment. The guardrail has blocked the term *Heavy metal* in the message. 

```
{
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {
                    "text": "Sorry, I can't answer questions about heavy metal music."
                }
            ]
        }
    },
    "stopReason": "guardrail_intervened",
    "usage": {
        "inputTokens": 0,
        "outputTokens": 0,
        "totalTokens": 0
    },
    "metrics": {
        "latencyMs": 721
    },
    "trace": {
        "guardrail": {
            "inputAssessment": {
                "3o06191495ze": {
                    "topicPolicy": {
                        "topics": [
                            {
                                "name": "Heavy metal",
                                "type": "DENY",
                                "action": "BLOCKED"
                            }
                        ]
                    },
                    "invocationMetrics": {
                        "guardrailProcessingLatency": 240,
                        "usage": {
                            "topicPolicyUnits": 1,
                            "contentPolicyUnits": 0,
                            "wordPolicyUnits": 0,
                            "sensitiveInformationPolicyUnits": 0,
                            "sensitiveInformationPolicyFreeUnits": 0,
                            "contextualGroundingPolicyUnits": 0
                        },
                        "guardrailCoverage": {
                            "textCharacters": {
                                "guarded": 39,
                                "total": 72
                            }
                        }
                    }
                }
            }
        }
    }
}
```

## Code example for using Converse API with guardrails


This example shows how to guard a conversation with the `Converse` and `ConverseStream` operations, preventing a model from creating a playlist that includes songs from the heavy metal genre. 

**To guard a conversation**

1. Create a guardrail by following the instructions at [Create your guardrail](guardrails-components.md). 
   + **Name** – Enter *Heavy metal*. 
   + **Definition for topic** – Enter *Avoid mentioning songs that are from the heavy metal genre of music.* 
   + **Add sample phrases** – Enter *Create a playlist of heavy metal songs.*

   In step 9, enter the following:
   + **Messaging shown for blocked prompts** – Enter *Sorry, I can't answer questions about heavy metal music.* 
   + **Messaging for blocked responses** – Enter *Sorry, the model generated an answer that mentioned heavy metal music.*

   You can configure other guardrail options, but it is not required for this example.

1. Create a version of the guardrail by following the instructions at [Create a version of a guardrail](guardrails-versions-create.md).

1. In the following code examples ([Converse](#converse-api-guardrail-example-converse) and [ConverseStream](#converse-api-guardrail-example-converse-stream)), set the following variables:
   + `guardrail_id` – The ID of the guardrail that you created in step 1.
   + `guardrail_version` – The version of the guardrail that you created in step 2.
   + `text` – Use `Create a playlist of heavy metal songs.` 

1. Run the code examples. The output should display the guardrail assessment and the output message `Text: Sorry, I can't answer questions about heavy metal music.`. The guardrail input assessment shows that the guardrail detected the term *heavy metal* in the input message.

1. (Optional) Test that the guardrail blocks inappropriate text that the model generates by changing the value of `text` to *List all genres of rock music.* and running the examples again. You should see an output assessment in the response. 

------
#### [ Converse ]

The following code uses your guardrail with the `Converse` operation.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use a guardrail with the Converse API.
"""

import logging
import json
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_conversation(bedrock_client,
                          model_id,
                          messages,
                          guardrail_config):
    """
    Sends a message to a model.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        guardrail_config (JSON): Configuration for the guardrail.

    Returns:
        response (JSON): The conversation that the model generated.

    """

    logger.info("Generating message with model %s", model_id)

    # Send the message.
    response = bedrock_client.converse(
        modelId=model_id,
        messages=messages,
        guardrailConfig=guardrail_config
    )

    return response


def main():
    """
    Entrypoint for example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    # The model to use.
    model_id="meta.llama3-8b-instruct-v1:0"

    # The ID and version of the guardrail.
    guardrail_id = "Your guardrail ID"
    guardrail_version = "DRAFT"

    # Configuration for the guardrail.
    guardrail_config = {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "trace": "enabled"
    }

    text = "Create a playlist of 2 heavy metal songs."
    context_text = "Only answer with a list of songs."

    # The message for the model and the content that you want the guardrail to assess.
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "text": context_text,
                },
                {
                    "guardContent": {
                        "text": {
                            "text": text
                        }
                    }
                }
            ]
        }
    ]

    try:

        print(json.dumps(messages, indent=4))

        bedrock_client = boto3.client(service_name='bedrock-runtime')

        response = generate_conversation(
            bedrock_client, model_id, messages, guardrail_config)

        output_message = response['output']['message']

        if response['stopReason'] == "guardrail_intervened":
            trace = response['trace']
            print("Guardrail trace:")
            print(json.dumps(trace['guardrail'], indent=4))

        for content in output_message['content']:
            print(f"Text: {content['text']}")

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished generating text with model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ ConverseStream ]

The following code uses your guardrail with the `ConverseStream` operation.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use a guardrail with the ConverseStream operation.
"""

import logging
import json
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_conversation(bedrock_client,
                        model_id,
                        messages,
                        guardrail_config):
    """
    Sends messages to a model and streams the response.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send.
        guardrail_config (JSON): Configuration for the guardrail.


    Returns:
        Nothing.

    """

    logger.info("Streaming messages with model %s", model_id)

    response = bedrock_client.converse_stream(
        modelId=model_id,
        messages=messages,
        guardrailConfig=guardrail_config
    )

    stream = response.get('stream')
    if stream:
        for event in stream:

            if 'messageStart' in event:
                print(f"\nRole: {event['messageStart']['role']}")

            if 'contentBlockDelta' in event:
                print(event['contentBlockDelta']['delta']['text'], end="")

            if 'messageStop' in event:
                print(f"\nStop reason: {event['messageStop']['stopReason']}")

            if 'metadata' in event:
                metadata = event['metadata']
                if 'trace' in metadata:
                    print("\nAssessment")
                    print(json.dumps(metadata['trace'], indent=4))


def main():
    """
    Entrypoint for streaming message API response example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    # The model to use.
    model_id = "amazon.titan-text-express-v1"

    # The ID and version of the guardrail.
    guardrail_id = "Change to your guardrail ID"
    guardrail_version = "DRAFT"

    # Configuration for the guardrail.
    guardrail_config = {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "trace": "enabled",
        "streamProcessingMode" : "sync"
    }

    text = "Create a playlist of heavy metal songs."
  
    # The message for the model and the content that you want the guardrail to assess.
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "text": text,
                },
                {
                    "guardContent": {
                        "text": {
                            "text": text
                        }
                    }
                }
            ]
        }
    ]

    try:
        bedrock_client = boto3.client(service_name='bedrock-runtime')

        stream_conversation(bedrock_client, model_id, messages,
                        guardrail_config)

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))

    else:
        print(
            f"Finished streaming messages with model {model_id}.")


if __name__ == "__main__":
    main()
```

------

# Use the ApplyGuardrail API in your application
Use the ApplyGuardrail API in your application

Amazon Bedrock Guardrails implements safeguards for your generative AI applications that are customized to your use cases and aligned with your responsible AI policies. Guardrails lets you configure denied topics, filter harmful content, and remove sensitive information. 

You can use the `ApplyGuardrail` API to assess any text using your pre-configured Amazon Bedrock Guardrails, without invoking the foundation models. 

Features of the `ApplyGuardrail` API include:
+ **Content validation** – You can send any text input or output to the `ApplyGuardrail` API to compare it with your defined topic avoidance rules, content filters, PII detectors, and word block lists. You can evaluate user inputs and FM generated outputs independently.
+ **Flexible deployment** – You can integrate the `ApplyGuardrail` API anywhere in your application flow to validate data before processing or serving results to the user. For example, if you are using a RAG application, you can now evaluate the user input prior to performing the retrieval, instead of waiting until the final response generation.
+ **Decoupled from foundation models** – The `ApplyGuardrail` API is decoupled from foundation models, so you can use guardrails without invoking a foundation model. You can use the assessment results to design the experience of your generative AI application.

**Topics**
+ [Call ApplyGuardrail in your application flow](#guardrails-use-independent-api-call)
+ [Specify the guardrail to use with ApplyGuardrail](#guardrails-use-indepedent-api-call-configure)
+ [Example use cases of ApplyGuardrail](#guardrails-use-independent-api-call-message)
+ [Return full output in ApplyGuardrail response](#guardrails-use-return-full-assessment)

## Call ApplyGuardrail in your application flow


In an `ApplyGuardrail` request, you pass all of the content that your configured guardrail should assess. Set the `source` field to `INPUT` when the content to be evaluated comes from a user (typically the input prompt to the LLM). Set `source` to `OUTPUT` when the guardrail should be enforced on the model's output (typically the LLM response). 

## Specify the guardrail to use with ApplyGuardrail


When using `ApplyGuardrail`, you specify the `guardrailIdentifier` and `guardrailVersion` of the guardrail that you want to use. You can also enable tracing for the guardrail, which provides information about the content that the guardrail blocks.

------
#### [ ApplyGuardrail API request ]

```
POST /guardrail/{guardrailIdentifier}/version/{guardrailVersion}/apply HTTP/1.1
{
    "source": "INPUT" | "OUTPUT",
    "content": [{
        "text": {
            "text": "string",
        }
    }, ]
}
```

------
#### [ ApplyGuardrail API response ]

```
{
    "usage": { 
          "topicPolicyUnits": "integer",
          "contentPolicyUnits": "integer",
          "wordPolicyUnits": "integer",
          "sensitiveInformationPolicyUnits": "integer",
          "sensitiveInformationPolicyFreeUnits": "integer",
          "contextualGroundingPolicyUnits": "integer"
     },
    "action": "GUARDRAIL_INTERVENED" | "NONE",
    "output": [
            // if guardrail intervened and output is masked we return request in same format
            // with masking
            // if guardrail intervened and blocked, output is a single text with canned message
            // if guardrail did not intervene, output is empty array
            {
                "text": "string",
            },
    ],
    "assessments": [{
        "topicPolicy": {
                "topics": [{
                    "name": "string",
                    "type": "DENY",
                    "action": "BLOCKED",
                }]
            },
            "contentPolicy": {
                "filters": [{
                    "type": "INSULTS | HATE | SEXUAL | VIOLENCE | MISCONDUCT |PROMPT_ATTACK",
                    "confidence": "NONE" | "LOW" | "MEDIUM" | "HIGH",
                    "filterStrength": "NONE" | "LOW" | "MEDIUM" | "HIGH",
                "action": "BLOCKED"
                }]
            },
            "wordPolicy": {
                "customWords": [{
                    "match": "string",
                    "action": "BLOCKED"
                }],
                "managedWordLists": [{
                    "match": "string",
                    "type": "PROFANITY",
                    "action": "BLOCKED"
                }]
            },
            "sensitiveInformationPolicy": {
                "piiEntities": [{
                    // for all types see: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GuardrailPiiEntityConfig.html#bedrock-Type-GuardrailPiiEntityConfig-type
                    "type": "ADDRESS" | "AGE" | ...,
                    "match": "string",
                    "action": "BLOCKED" | "ANONYMIZED"
                }],
                "regexes": [{
                    "name": "string",
                    "regex": "string",
                    "match": "string",
                    "action": "BLOCKED" | "ANONYMIZED"
                }],
            "contextualGroundingPolicy": {
                 "filters": [{
                   "type": "GROUNDING | RELEVANCE",
                   "threshold": "double",
                   "score": "double",
                   "action": "BLOCKED | NONE"
                 }]
            },
            "invocationMetrics": {
                "guardrailProcessingLatency": "integer",
                "usage": {
                    "topicPolicyUnits": "integer",
                    "contentPolicyUnits": "integer",
                    "wordPolicyUnits": "integer",
                    "sensitiveInformationPolicyUnits": "integer",
                    "sensitiveInformationPolicyFreeUnits": "integer",
                    "contextualGroundingPolicyUnits": "integer"
                },
                "guardrailCoverage": {
                    "textCharacters": {
                        "guarded":"integer",
                        "total": "integer"
                    }
                }
            }
        },
        "guardrailCoverage": {
            "textCharacters": {
                "guarded": "integer",
                "total": "integer"
            }
        }
    ]
}
```

------
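The request and response shapes above map onto the `apply_guardrail` operation of the boto3 `bedrock-runtime` client. The following minimal sketch builds the request arguments; the helper function name is illustrative and `someGuardrailId` is a placeholder for your own guardrail ID.

```
def build_apply_guardrail_args(guardrail_id, guardrail_version, source, texts):
    """Builds keyword arguments for an ApplyGuardrail call.

    source must be "INPUT" (user content) or "OUTPUT" (model content).
    """
    if source not in ("INPUT", "OUTPUT"):
        raise ValueError("source must be 'INPUT' or 'OUTPUT'")
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": source,
        "content": [{"text": {"text": text}} for text in texts],
    }


args = build_apply_guardrail_args(
    "someGuardrailId",   # placeholder: substitute your guardrail ID
    "DRAFT",
    "INPUT",
    ["Hi, my name is Zaid. Which car brand is reliable?"],
)

# The call itself requires AWS credentials and a real guardrail:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.apply_guardrail(**args)
# response["action"] is "NONE" or "GUARDRAIL_INTERVENED"
```

Because the request body is plain JSON, validating it locally (as in the sketch) before sending keeps malformed `source` or `content` values from reaching the service.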

## Example use cases of ApplyGuardrail


The output of an `ApplyGuardrail` request depends on the action that the guardrail took on the passed content.
+ If the guardrail intervened and the content was only masked, the exact content is returned with masking applied.
+ If the guardrail intervened and blocked the request content, the `outputs` field contains a single text: the canned message from your guardrail configuration.
+ If the guardrail took no action on the request content, the `outputs` array is empty.

------
#### [ Guardrails takes no action ]

**Request example**

```
{
    "source": "OUTPUT",
    "content": [
        "text": {
            "text": "Hi, my name is Zaid. Which car brand is reliable?"
        }
    ]
}
```

**Response example**

```
{
    "usage": {
        "topicPolicyUnitsProcessed": 1,
        "contentPolicyUnitsProcessed": 1,
        "wordPolicyUnitsProcessed": 0,
        "sensitiveInformationPolicyFreeUnits": 0
    },
    "action": "NONE",
    "outputs": [],
    "assessments": [{}]
}
```

------
#### [ Guardrails blocks content ]

**Response example**

```
{
    "usage": {
        "topicPolicyUnitsProcessed": 1,
        "contentPolicyUnitsProcessed": 1,
        "wordPolicyUnitsProcessed": 0,
        "sensitiveInformationPolicyFreeUnits": 0
    },
    "action": "GUARDRAIL_INTERVENED",
    "outputs": [{
        "text": "Configured guardrail canned message (i.e., can't respond)"
    }],
    "assessments": [{
        "topicPolicy": {
            "topics": [{
                "name": "Cars",
                "type": "DENY",
                "action": "BLOCKED"
            }]
        },
        "sensitiveInformationPolicy": {
            "piiEntities": [{
                "type": "NAME",
                "match": "ZAID",
                "action": "ANONYMIZED"
            }],
            "regexes": []
        }
    }]
}
```

------
#### [ Guardrails masks content ]

**Response example**

Guardrails intervenes by masking the name `ZAID`.

```
{
    "usage": {
        "topicPolicyUnitsProcessed": 1,
        "contentPolicyUnitsProcessed": 1,
        "wordPolicyUnitsProcessed": 0,
        "sensitiveInformationPolicyFreeUnits": 0
    },
    "action": "GUARDRAIL_INTERVENED",
    "outputs": [{
            "text": "Hi, my name is {NAME}. Which car brand is reliable?"
        },
        {
            "text": "Hello {NAME}, ABC Cars are reliable ..."
        }
    ],
    "assessments": [{
        "sensitiveInformationPolicy": {
            "piiEntities": [{
                "type": "NAME",
                "match": "ZAID",
                "action": "ANONYMIZED"
            }],
            "regexes": []
        }
    }]
}
```

------
#### [ AWS CLI example ]

**Input example**

```
aws bedrock-runtime apply-guardrail \
    --cli-input-json '{
        "guardrailIdentifier": "someGuardrailId",
        "guardrailVersion": "DRAFT",
        "source": "INPUT",
        "content": [
            {
                "text": {
                    "text": "How should I invest for my retirement? I want to be able to generate $5,000 a month"
                }
            }
        ]
    }' \
    --region us-east-1 \
    --output json
```

**Output example (blocks content)**

```
{
    "usage": {
        "topicPolicyUnits": 1,
        "contentPolicyUnits": 1,
        "wordPolicyUnits": 1,
        "sensitiveInformationPolicyUnits": 1,
        "sensitiveInformationPolicyFreeUnits": 0
    },
    "action": "GUARDRAIL_INTERVENED",
    "outputs": [
        {
            "text": "I apologize, but I am not able to provide fiduciary advice. ="
        }
    ],
    "assessments": [
        {
            "topicPolicy": {
                "topics": [
                    {
                        "name": "Fiduciary Advice",
                        "type": "DENY",
                        "action": "BLOCKED"
                    }
                ]
            }
        }
    ]
}
```

------
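An application can branch on the three response cases with a few lines of Python. The following sketch is illustrative (the function name is hypothetical, and the sample response is abridged from the masking example above):

```
def resolve_guardrail_output(response):
    """Returns the texts to use downstream, or None if nothing changed.

    - action "NONE": no intervention; outputs is empty and the
      original content stands.
    - action "GUARDRAIL_INTERVENED": outputs holds either the masked
      content or a single canned blocked message.
    """
    if response["action"] == "NONE":
        return None
    return [output["text"] for output in response["outputs"]]


# Abridged from the masking example above.
masked_response = {
    "action": "GUARDRAIL_INTERVENED",
    "outputs": [{"text": "Hi, my name is {NAME}. Which car brand is reliable?"}],
}
print(resolve_guardrail_output(masked_response))
```

Returning `None` for the no-action case lets the caller distinguish "use the original content unchanged" from "substitute the guardrail's outputs".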

## Return full output in ApplyGuardrail response


Content is considered detected if it breaches your guardrail configurations. For example, contextual grounding is considered detected if the grounding or relevance score is less than the corresponding threshold.

By default, the [ApplyGuardrail](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ApplyGuardrail.html) operation only returns detected content in a response. You can specify the `outputScope` field with the `FULL` value to return the full output. In this case, the response will also include non-detected entries for enhanced debugging.

You can configure the same behavior in the `InvokeModel` and `Converse` operations by setting the guardrail `trace` field to `enabled_full`.

**Note**  
The full output scope doesn't apply to word filters or to regexes in sensitive information filters. It does apply to all other filtering policies, including sensitive information filters that can detect personally identifiable information (PII).
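Assuming the boto3 parameter mirrors the API field name, requesting the full scope is a matter of adding `outputScope` to the request; a sketch (`someGuardrailId` is a placeholder):

```
# Sketch only: substitute your own guardrail ID and version.
request_args = {
    "guardrailIdentifier": "someGuardrailId",
    "guardrailVersion": "DRAFT",
    "source": "INPUT",
    "content": [{"text": {"text": "How should I invest for my retirement?"}}],
    "outputScope": "FULL",   # default scope returns only detected content
}

# The call requires AWS credentials and a real guardrail:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.apply_guardrail(**request_args)
# With "FULL", assessments also include non-detected entries for debugging.
```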