GuardrailAction
- class aws_cdk.aws_bedrock_alpha.GuardrailAction(*values)
Bases:
Enum
(experimental) Guardrail action when a sensitive entity is detected.
- Stability:
experimental
- ExampleMetadata:
fixture=default infused
Example:
guardrail = bedrock.Guardrail(self, "bedrockGuardrails",
    guardrail_name="my-BedrockGuardrails",
    # Configure tier for topic filters (optional)
    topics_tier_config=bedrock.TierConfig.STANDARD
)

# Use a predefined topic
guardrail.add_denied_topic_filter(bedrock.Topic.FINANCIAL_ADVICE)

# Create a custom topic with input/output actions
guardrail.add_denied_topic_filter(
    bedrock.Topic.custom(
        name="Legal_Advice",
        definition="Offering guidance or suggestions on legal matters, legal actions, interpretation of laws, or legal rights and responsibilities.",
        examples=[
            "Can I sue someone for this?",
            "What are my legal rights in this situation?",
            "Is this action against the law?",
            "What should I do to file a legal complaint?",
            "Can you explain this law to me?"
        ],
        # props below are optional
        input_action=bedrock.GuardrailAction.BLOCK,
        input_enabled=True,
        output_action=bedrock.GuardrailAction.NONE,
        output_enabled=True
    )
)
Attributes
- ANONYMIZE
(experimental) If sensitive information is detected in the model response, the guardrail masks it: the sensitive information is replaced with identifier tags (for example: [NAME-1], [NAME-2], [EMAIL-1], etc.).
- Stability:
experimental
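ANONYMIZE is typically paired with a PII filter, so that detected entities are masked in the response rather than the whole response being blocked. A minimal sketch, assuming this alpha module's add_pii_filter method and GeneralPIIType enum (verify the exact names against the Guardrail API):

# Mask email addresses in model responses instead of blocking them.
# add_pii_filter and GeneralPIIType are assumed names from this alpha module.
guardrail.add_pii_filter(
    type=bedrock.GeneralPIIType.EMAIL,
    action=bedrock.GuardrailAction.ANONYMIZE
)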
- BLOCK
(experimental) If sensitive information is detected in the prompt or response, the guardrail blocks all the content and returns a message that you configure.
- Stability:
experimental
- NONE
(experimental) Do not take any action.
- Stability:
experimental
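BLOCK and NONE can be set independently for prompts and responses wherever a filter accepts input_action and output_action, as in the custom topic example above. A minimal sketch using a word filter, assuming this alpha module's add_word_filter method and its text parameter (verify the exact signature against the Guardrail API):

# Block the word in user prompts, but take no action when it
# appears in model responses.
# add_word_filter and its parameter names are assumed from this alpha module.
guardrail.add_word_filter(
    text="confidential",
    input_action=bedrock.GuardrailAction.BLOCK,
    output_action=bedrock.GuardrailAction.NONE
)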