ToxicContent - Amazon Comprehend API Reference

Important

Service availability notice: Amazon Comprehend topic modeling, event detection, and prompt safety classification features will no longer be available to new customers, effective April 30, 2026. For more information, see Amazon Comprehend feature availability change.

Toxic content analysis result for one string. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.

Contents

Name

The name of the toxic content type.

Type: String

Valid Values: GRAPHIC | HARASSMENT_OR_ABUSE | HATE_SPEECH | INSULT | PROFANITY | SEXUAL | VIOLENCE_OR_THREAT

Required: No

Score

Model confidence in the detected content type. The value ranges from 0 to 1, where 1 indicates the highest confidence.

Type: Float

Required: No
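For illustration, ToxicContent objects appear in the Labels array of each result returned by the DetectToxicContent action. The sketch below filters a response by label score; the sample response values are made up for demonstration, and the 0.5 threshold is an arbitrary assumption, not a service default.

```python
# Illustrative sketch: picking out ToxicContent labels whose Score
# meets a threshold. The sample_response dict mimics the shape of a
# DetectToxicContent response; its values are fabricated examples.

sample_response = {
    "ResultList": [
        {
            "Labels": [
                {"Name": "PROFANITY", "Score": 0.92},
                {"Name": "INSULT", "Score": 0.41},
                {"Name": "HATE_SPEECH", "Score": 0.03},
            ],
            "Toxicity": 0.88,
        }
    ]
}

def flag_toxic_labels(response, threshold=0.5):
    """Return, per analyzed string, the Name of each ToxicContent
    label whose Score meets the threshold."""
    flagged = []
    for result in response["ResultList"]:
        names = [
            label["Name"]
            for label in result.get("Labels", [])
            if label.get("Score", 0.0) >= threshold
        ]
        flagged.append(names)
    return flagged

print(flag_toxic_labels(sample_response))  # → [['PROFANITY']]
```

Because both Name and Score are optional fields, the sketch uses `.get()` with defaults rather than assuming every label carries a score.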

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following: