interface TopicConfigProperty
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.aws_bedrock.CfnGuardrail.TopicConfigProperty |
| Go | github.com/aws/aws-cdk-go/awscdk/v2/awsbedrock#CfnGuardrail_TopicConfigProperty |
| Java | software.amazon.awscdk.services.bedrock.CfnGuardrail.TopicConfigProperty |
| Python | aws_cdk.aws_bedrock.CfnGuardrail.TopicConfigProperty |
| TypeScript | aws-cdk-lib » aws_bedrock » CfnGuardrail » TopicConfigProperty |
Details about topics for the guardrail to identify and deny.
Example
```ts
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

const topicConfigProperty: bedrock.CfnGuardrail.TopicConfigProperty = {
  definition: 'definition',
  name: 'name',
  type: 'type',

  // the properties below are optional
  examples: ['examples'],
  inputAction: 'inputAction',
  inputEnabled: false,
  outputAction: 'outputAction',
  outputEnabled: false,
};
```
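To show the property in context, here is a minimal sketch of a guardrail whose topic policy denies a single topic. The construct scope, guardrail name, messaging strings, and the financial-advice topic are illustrative assumptions, not values taken from this reference.

```ts
import { Stack, aws_bedrock as bedrock } from 'aws-cdk-lib';

declare const stack: Stack;

// Minimal sketch (illustrative values): deny a financial-advice topic and
// block matching content on both the input and the output.
new bedrock.CfnGuardrail(stack, 'Guardrail', {
  name: 'example-guardrail',
  blockedInputMessaging: 'Sorry, I cannot help with that request.',
  blockedOutputsMessaging: 'Sorry, I cannot provide that response.',
  topicPolicyConfig: {
    topicsConfig: [{
      definition: 'Guidance on buying or selling investments, securities, or retirement products.',
      name: 'FinancialAdvice',
      type: 'DENY', // the only supported value
      examples: ['Which stocks should I buy this year?'],
      inputAction: 'BLOCK',
      inputEnabled: true,
      outputAction: 'BLOCK',
      outputEnabled: true,
    }],
  },
});
```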
Properties
| Name | Type | Description |
|---|---|---|
| definition | string | A definition of the topic to deny. |
| name | string | The name of the topic to deny. |
| type | string | Specifies to deny the topic. |
| examples? | string[] | A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic. |
| inputAction? | string | Specifies the action to take when harmful content is detected in the input. |
| inputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the input. |
| outputAction? | string | Specifies the action to take when harmful content is detected in the output. |
| outputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the output. |
definition
Type: string
A definition of the topic to deny.
name
Type: string
The name of the topic to deny.
type
Type: string
Specifies to deny the topic. The only supported value is DENY.
examples?
Type: string[] (optional)
A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.
inputAction?
Type: string (optional)

Specifies the action to take when harmful content is detected in the input. Supported values include:

- BLOCK – Block the content and replace it with blocked messaging.
- NONE – Take no action, but return detection information in the trace response.
inputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the input.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
outputAction?
Type: string (optional)

Specifies the action to take when harmful content is detected in the output. Supported values include:

- BLOCK – Block the content and replace it with blocked messaging.
- NONE – Take no action, but return detection information in the trace response.
outputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the output.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
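The action and enabled flags above can be combined to audit a topic without blocking it: set an action to NONE so detections are only reported in the trace, or disable evaluation on one side entirely. Below is a minimal sketch of that pattern; the topic name and definition are illustrative assumptions.

```ts
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

// Sketch (illustrative values): detect the topic on input without blocking,
// and skip output evaluation entirely (no charge, and the evaluation does
// not appear in the response).
const auditOnlyTopic: bedrock.CfnGuardrail.TopicConfigProperty = {
  definition: 'Requests for legal opinions or contract interpretation.',
  name: 'LegalAdvice',
  type: 'DENY',

  inputAction: 'NONE', // report the detection in the trace, take no action
  inputEnabled: true,
  outputEnabled: false, // skip output evaluation entirely
};
```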
