interface ContentFilter
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.AWS.Bedrock.Alpha.ContentFilter |
| Go | github.com/aws/aws-cdk-go/awsbedrockalpha/v2#ContentFilter |
| Java | software.amazon.awscdk.services.bedrock.alpha.ContentFilter |
| Python | aws_cdk.aws_bedrock_alpha.ContentFilter |
| TypeScript (source) | @aws-cdk/aws-bedrock-alpha » ContentFilter |
Interface to declare a content filter.
Example
```ts
import * as bedrock from '@aws-cdk/aws-bedrock-alpha';

// Create a guardrail to filter inappropriate content
const guardrail = new bedrock.Guardrail(this, 'bedrockGuardrails', {
  guardrailName: 'my-BedrockGuardrails',
  description: 'Legal ethical guardrails.',
});

guardrail.addContentFilter({
  type: bedrock.ContentFilterType.SEXUAL,
  inputStrength: bedrock.ContentFilterStrength.HIGH,
  outputStrength: bedrock.ContentFilterStrength.MEDIUM,
});

// Create an agent with the guardrail
const agentWithGuardrail = new bedrock.Agent(this, 'AgentWithGuardrail', {
  foundationModel: bedrock.BedrockFoundationModel.ANTHROPIC_CLAUDE_HAIKU_V1_0,
  instruction: 'You are a helpful and friendly agent that answers questions about literature.',
  guardrail: guardrail,
});
```
Properties
| Name | Type | Description |
|---|---|---|
| inputStrength | ContentFilterStrength | The strength of the content filter to apply to prompts / user input. |
| outputStrength | ContentFilterStrength | The strength of the content filter to apply to model responses. |
| type | ContentFilterType | The type of harmful category that the content filter is applied to. |
| inputAction? | GuardrailAction | The action to take when content is detected in the input. |
| inputEnabled? | boolean | Whether the content filter is enabled for input. |
| inputModalities? | ModalityType[] | The input modalities to apply the content filter to. |
| outputAction? | GuardrailAction | The action to take when content is detected in the output. |
| outputEnabled? | boolean | Whether the content filter is enabled for output. |
| outputModalities? | ModalityType[] | The output modalities to apply the content filter to. |
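
Only type, inputStrength, and outputStrength are required; the remaining properties fall back to the defaults documented below. The sketch that follows spells out every property explicitly for the guardrail created in the example above. It assumes the module also exports a GuardrailAction enum with a BLOCK member and a ModalityType enum with a TEXT member; the exact member names are not shown on this page.

```ts
// Fully specified filter; only `type`, `inputStrength`, and `outputStrength`
// are required. GuardrailAction.BLOCK and ModalityType.TEXT are assumed
// member names that mirror the documented defaults.
const explicitFilter: bedrock.ContentFilter = {
  type: bedrock.ContentFilterType.SEXUAL,
  inputStrength: bedrock.ContentFilterStrength.HIGH,
  outputStrength: bedrock.ContentFilterStrength.MEDIUM,
  inputAction: bedrock.GuardrailAction.BLOCK,    // default
  inputEnabled: true,                            // default
  inputModalities: [bedrock.ModalityType.TEXT],  // default scope is text
  outputAction: bedrock.GuardrailAction.BLOCK,   // default
  outputEnabled: true,                           // default
  outputModalities: [bedrock.ModalityType.TEXT], // default scope is text
};

guardrail.addContentFilter(explicitFilter);
```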
inputStrength
Type: ContentFilterStrength
The strength of the content filter to apply to prompts / user input.
outputStrength
Type: ContentFilterStrength
The strength of the content filter to apply to model responses.
type
Type: ContentFilterType
The type of harmful category that the content filter is applied to.
inputAction?
Type: GuardrailAction (optional, default: GuardrailAction.BLOCK)
The action to take when content is detected in the input.
inputEnabled?
Type: boolean (optional, default: true)
Whether the content filter is enabled for input.
inputModalities?
Type: ModalityType[] (optional, default: undefined - applies to the text modality)
The input modalities to apply the content filter to.
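
By default the filter inspects only text prompts. The following sketch widens it to image input as well; ModalityType.TEXT and ModalityType.IMAGE are assumed enum member names and may differ in the released module.

```ts
// Sketch: apply the filter to both text and image prompts.
// ModalityType.TEXT / ModalityType.IMAGE are assumed enum members.
guardrail.addContentFilter({
  type: bedrock.ContentFilterType.SEXUAL,
  inputStrength: bedrock.ContentFilterStrength.HIGH,
  outputStrength: bedrock.ContentFilterStrength.MEDIUM,
  inputModalities: [bedrock.ModalityType.TEXT, bedrock.ModalityType.IMAGE],
});
```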
outputAction?
Type: GuardrailAction (optional, default: GuardrailAction.BLOCK)
The action to take when content is detected in the output.
outputEnabled?
Type: boolean (optional, default: true)
Whether the content filter is enabled for output.
outputModalities?
Type: ModalityType[] (optional, default: undefined - applies to the text modality)
The output modalities to apply the content filter to.
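
Because the output-side properties mirror the input-side ones, a filter can be tuned asymmetrically. The sketch below keeps the default blocking behaviour for prompts but turns response filtering off entirely; it reuses the guardrail from the example above.

```ts
// Sketch: filter user input aggressively, but skip filtering of model responses.
// Input-side defaults (BLOCK action, enabled, text modality) still apply.
guardrail.addContentFilter({
  type: bedrock.ContentFilterType.SEXUAL,
  inputStrength: bedrock.ContentFilterStrength.HIGH,
  outputStrength: bedrock.ContentFilterStrength.MEDIUM,
  outputEnabled: false, // disable the filter on the output side
});
```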
