interface ManagedWordsConfigProperty
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.Mixins.Preview.AWS.Bedrock.Mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty |
| Go | github.com/aws/aws-cdk-go/awscdkmixinspreview/v2/awsbedrock/mixins#CfnGuardrailPropsMixin_ManagedWordsConfigProperty |
| Java | software.amazon.awscdk.mixins.preview.services.bedrock.mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty |
| Python | aws_cdk.mixins_preview.aws_bedrock.mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty |
| TypeScript | @aws-cdk/mixins-preview » aws_bedrock » mixins » CfnGuardrailPropsMixin » ManagedWordsConfigProperty |
The managed word list to configure for the guardrail.
Example
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';
const managedWordsConfigProperty: bedrock_mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty = {
  inputAction: 'inputAction',
  inputEnabled: false,
  outputAction: 'outputAction',
  outputEnabled: false,
  type: 'type',
};
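For reference, the sketch below fills in more concrete values. The BLOCK and NONE actions come from the property descriptions below; PROFANITY as the managed word type is an assumption based on the underlying AWS::Bedrock::Guardrail CloudFormation resource, so verify it against the current resource documentation.
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';
// Sketch with illustrative values instead of placeholders.
const profanityFilter: bedrock_mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty = {
  type: 'PROFANITY',     // assumed managed word list type
  inputAction: 'BLOCK',  // block matching content in prompts
  inputEnabled: true,    // evaluate user input
  outputAction: 'NONE',  // only report detections for model output in the trace
  outputEnabled: true,   // evaluate model output
};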
Properties
| Name | Type | Description |
|---|---|---|
| inputAction? | string | Specifies the action to take when harmful content is detected in the input. |
| inputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the input. |
| outputAction? | string | Specifies the action to take when harmful content is detected in the output. |
| outputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the output. |
| type? | string | The managed word type to configure for the guardrail. |
inputAction?
Type: string (optional)
Specifies the action to take when harmful content is detected in the input. Supported values include:
- BLOCK – Block the content and replace it with blocked messaging.
- NONE – Take no action, but return detection information in the trace response.
inputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the input.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
outputAction?
Type: string (optional)
Specifies the action to take when harmful content is detected in the output. Supported values include:
- BLOCK – Block the content and replace it with blocked messaging.
- NONE – Take no action, but return detection information in the trace response.
outputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the output.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
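To illustrate the cost note above, the sketch below evaluates the input but disables evaluation on the output, so the output evaluation isn't performed, isn't charged, and doesn't appear in the response. The combination of values is illustrative only.
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';
// Illustrative only: evaluate prompts, skip evaluation of model output.
const inputOnlyCheck: bedrock_mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty = {
  inputAction: 'BLOCK',
  inputEnabled: true,
  outputEnabled: false, // no output evaluation, no charge, nothing in the response
};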
type?
Type: string (optional)
The managed word type to configure for the guardrail.
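This property is not used on its own; in the underlying AWS::Bedrock::Guardrail resource, managed word configurations are nested inside the guardrail's word policy. The sketch below assumes the mixin props mirror that structure with a wordPolicyConfig.managedWordListsConfig array; the nesting and the PROFANITY value are assumptions, so check the CfnGuardrailPropsMixin reference for the exact shape.
import { mixins as bedrock_mixins } from '@aws-cdk/mixins-preview/aws-bedrock';
// Assumed nesting, mirroring the AWS::Bedrock::Guardrail resource:
// WordPolicyConfig -> ManagedWordListsConfig -> ManagedWordsConfig.
const managedWords: bedrock_mixins.CfnGuardrailPropsMixin.ManagedWordsConfigProperty = {
  type: 'PROFANITY', // assumed managed word list type
  inputAction: 'BLOCK',
  outputAction: 'BLOCK',
};
// Hypothetical placement inside the guardrail props' word policy.
const wordPolicyConfig = {
  managedWordListsConfig: [managedWords],
};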
