interface TopicConfigProperty
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.Mixins.Preview.AWS.Bedrock.Mixins.CfnGuardrailPropsMixin.TopicConfigProperty |
| Go | github.com/aws/aws-cdk-go/awscdkmixinspreview/v2/awsbedrock/mixins#CfnGuardrailPropsMixin_TopicConfigProperty |
| Java | software.amazon.awscdk.mixins.preview.services.bedrock.mixins.CfnGuardrailPropsMixin.TopicConfigProperty |
| Python | aws_cdk.mixins_preview.aws_bedrock.mixins.CfnGuardrailPropsMixin.TopicConfigProperty |
| TypeScript | @aws-cdk/mixins-preview » aws_bedrock » mixins » CfnGuardrailPropsMixin » TopicConfigProperty |
Details about topics for the guardrail to identify and deny.
Example
```ts
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';

const topicConfigProperty: bedrock_mixins.CfnGuardrailPropsMixin.TopicConfigProperty = {
  definition: 'definition',
  examples: ['examples'],
  inputAction: 'inputAction',
  inputEnabled: false,
  name: 'name',
  outputAction: 'outputAction',
  outputEnabled: false,
  type: 'type',
};
```
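The placeholders above don't show realistic values. As a rough sketch of a populated config, assuming a guardrail that denies financial-advice prompts (the topic name, definition, and example prompts below are illustrative assumptions, not values from the AWS documentation):

```ts
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';

// Illustrative values only: the topic, definition, and examples are
// assumptions chosen for the sketch.
const denyFinancialAdvice: bedrock_mixins.CfnGuardrailPropsMixin.TopicConfigProperty = {
  name: 'FinancialAdvice',
  definition: 'Guidance on investments, loans, or other personal financial decisions.',
  examples: [
    'Which stocks should I buy this year?',
    'How should I allocate my retirement savings?',
  ],
  type: 'DENY',          // topic policies deny the configured topic
  inputAction: 'BLOCK',  // block matching prompts and return blocked messaging
  inputEnabled: true,
  outputAction: 'BLOCK', // block matching model responses as well
  outputEnabled: true,
};
```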
Properties
| Name | Type | Description |
|---|---|---|
| definition? | string | A definition of the topic to deny. |
| examples? | string[] | A list of example prompts that can be categorized as belonging to the topic. |
| inputAction? | string | Specifies the action to take when harmful content is detected in the input. Supported values: BLOCK, NONE. |
| inputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the input. |
| name? | string | The name of the topic to deny. |
| outputAction? | string | Specifies the action to take when harmful content is detected in the output. Supported values: BLOCK, NONE. |
| outputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the output. |
| type? | string | Specifies that the topic should be denied. |
definition?
Type: string (optional)
A definition of the topic to deny.
examples?
Type: string[] (optional)
A list of example prompts that can be categorized as belonging to the topic.
inputAction?
Type: string (optional)
Specifies the action to take when harmful content is detected in the input. Supported values:
- BLOCK: Block the content and replace it with blocked messaging.
- NONE: Take no action but return detection information in the trace response.
inputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the input.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
name?
Type: string (optional)
The name of the topic to deny.
outputAction?
Type: string (optional)
Specifies the action to take when harmful content is detected in the output. Supported values:
- BLOCK: Block the content and replace it with blocked messaging.
- NONE: Take no action but return detection information in the trace response.
outputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the output.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
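Because the action and the enabled flag are independent, a topic can be evaluated without blocking anything. A minimal sketch of such a monitor-only configuration (the topic name is an illustrative assumption):

```ts
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';

// Monitor-only sketch: evaluation stays enabled so detections show up in the
// trace response, but NONE means nothing is blocked. Setting the *Enabled
// flags to false instead would skip (and avoid charges for) the evaluation.
const monitorOnlyTopic: bedrock_mixins.CfnGuardrailPropsMixin.TopicConfigProperty = {
  name: 'SensitiveTopic', // illustrative name
  type: 'DENY',
  inputAction: 'NONE',
  inputEnabled: true,
  outputAction: 'NONE',
  outputEnabled: true,
};
```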
type?
Type: string (optional)
Specifies that the topic should be denied.
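This property type mirrors the TopicConfigProperty on the standard aws-cdk-lib bedrock.CfnGuardrail L1 construct, which is where a topic config typically ends up. A sketch using that non-mixin construct (the guardrail name, messaging, and topic values are illustrative assumptions):

```ts
import { Stack } from 'aws-cdk-lib';
import * as bedrock from 'aws-cdk-lib/aws-bedrock';

declare const stack: Stack;

// Sketch only: the aws-cdk-lib L1 construct accepts the same topic config
// shape via topicPolicyConfig.topicsConfig.
new bedrock.CfnGuardrail(stack, 'Guardrail', {
  name: 'my-guardrail',
  blockedInputMessaging: 'Sorry, I cannot help with that topic.',
  blockedOutputsMessaging: 'Sorry, I cannot help with that topic.',
  topicPolicyConfig: {
    topicsConfig: [{
      name: 'FinancialAdvice', // illustrative
      definition: 'Guidance on investments, loans, or other personal financial decisions.',
      type: 'DENY',
    }],
  },
});
```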
