interface ManagedWordsConfigProperty
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.AWS.Bedrock.CfnGuardrail.ManagedWordsConfigProperty |
| Go | github.com/aws/aws-cdk-go/awscdk/v2/awsbedrock#CfnGuardrail_ManagedWordsConfigProperty |
| Java | software.amazon.awscdk.services.bedrock.CfnGuardrail.ManagedWordsConfigProperty |
| Python | aws_cdk.aws_bedrock.CfnGuardrail.ManagedWordsConfigProperty |
| TypeScript | aws-cdk-lib » aws_bedrock » CfnGuardrail » ManagedWordsConfigProperty |
The managed word list to configure for the guardrail.
Example
```ts
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

const managedWordsConfigProperty: bedrock.CfnGuardrail.ManagedWordsConfigProperty = {
  type: 'type',

  // the properties below are optional
  inputAction: 'inputAction',
  inputEnabled: false,
  outputAction: 'outputAction',
  outputEnabled: false,
};
```
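The placeholders above only show the shape of the type. The sketch below is one way the property might be attached to a guardrail through wordPolicyConfig.managedWordListsConfig; the 'PROFANITY' type value, the guardrail name, and the blocked messaging strings are illustrative assumptions, not values taken from this page.

```ts
import { App, Stack, StackProps, aws_bedrock as bedrock } from 'aws-cdk-lib';
import { Construct } from 'constructs';

class GuardrailStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Managed word list: evaluate and block matches on both input and output.
    // 'PROFANITY' is assumed to be the supported managed word list type.
    const managedWords: bedrock.CfnGuardrail.ManagedWordsConfigProperty = {
      type: 'PROFANITY',
      inputAction: 'BLOCK',
      inputEnabled: true,
      outputAction: 'BLOCK',
      outputEnabled: true,
    };

    new bedrock.CfnGuardrail(this, 'Guardrail', {
      name: 'example-guardrail',
      blockedInputMessaging: 'Sorry, I cannot respond to that input.',
      blockedOutputsMessaging: 'Sorry, I cannot return that response.',
      wordPolicyConfig: {
        managedWordListsConfig: [managedWords],
      },
    });
  }
}

new GuardrailStack(new App(), 'GuardrailStack');
```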
Properties
| Name | Type | Description |
|---|---|---|
| type | string | The managed word type to configure for the guardrail. |
| inputAction? | string | Specifies the action to take when harmful content is detected in the input. |
| inputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the input. |
| outputAction? | string | Specifies the action to take when harmful content is detected in the output. |
| outputEnabled? | boolean \| IResolvable | Specifies whether to enable guardrail evaluation on the output. |
type
Type: string
The managed word type to configure for the guardrail.
inputAction?
Type: string (optional)
Specifies the action to take when harmful content is detected in the input. Supported values include:
- BLOCK - Block the content and replace it with blocked messaging.
- NONE - Take no action, but return detection information in the trace response.
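As a brief illustration of the two actions, the snippet below contrasts a blocking configuration with a trace-only one; the 'PROFANITY' type value is an assumption, since the supported managed word list types aren't listed on this page.

```ts
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

// Block matched managed words in the input and replace them with blocked messaging.
const blockOnInput: bedrock.CfnGuardrail.ManagedWordsConfigProperty = {
  type: 'PROFANITY', // assumed managed word list type
  inputAction: 'BLOCK',
};

// Detect only: matches are reported in the trace response but the content is not blocked.
const traceOnlyInput: bedrock.CfnGuardrail.ManagedWordsConfigProperty = {
  type: 'PROFANITY', // assumed managed word list type
  inputAction: 'NONE',
};
```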
inputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the input.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
outputAction?
Type: string (optional)
Specifies the action to take when harmful content is detected in the output. Supported values include:
- BLOCK - Block the content and replace it with blocked messaging.
- NONE - Take no action, but return detection information in the trace response.
outputEnabled?
Type: boolean | IResolvable (optional)
Specifies whether to enable guardrail evaluation on the output.
When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.
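A minimal sketch of switching evaluation off on one side, again assuming 'PROFANITY' as the managed word list type: the disabled side incurs no evaluation charge and produces no entry in the response.

```ts
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

// Evaluate managed words only on the model output; input evaluation is disabled,
// so the input side isn't charged and doesn't appear in the response.
const outputOnly: bedrock.CfnGuardrail.ManagedWordsConfigProperty = {
  type: 'PROFANITY', // assumed managed word list type
  inputEnabled: false,
  outputAction: 'BLOCK',
  outputEnabled: true,
};
```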
