ToxicLabels
Important
Service availability notice: Amazon Comprehend topic modeling, event detection, and prompt safety classification features will no longer be available to new customers, effective April 30, 2026. For more information, see Amazon Comprehend feature availability change.
Toxicity analysis result for one string. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.
Contents
- Labels

  Array of toxic content types identified in the string.

  Type: Array of ToxicContent objects

  Required: No

- Toxicity

  Overall toxicity score for the string. The value ranges from zero to one, where one is the highest confidence.

  Type: Float

  Required: No
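The fields above can be consumed directly from the `DetectToxicContent` response (one ToxicLabels entry per input text segment). The sketch below shows one way to filter a result; the field names (`Labels`, `Toxicity`, `Name`, `Score`) follow this reference, but the sample label names, scores, and the 0.5 threshold are illustrative assumptions, not real API output or an AWS recommendation.

```python
# Sketch: filtering a ToxicLabels result as returned by Amazon Comprehend
# toxicity detection. The sample data below is made up for illustration.

# One ToxicLabels entry per analyzed text segment.
sample_result = {
    "Labels": [
        {"Name": "PROFANITY", "Score": 0.82},  # hypothetical values
        {"Name": "INSULT", "Score": 0.47},
    ],
    "Toxicity": 0.79,  # overall score, range 0.0-1.0
}

# Assumed application-level cutoff; tune for your use case.
TOXICITY_THRESHOLD = 0.5


def flag_toxic(result: dict, threshold: float = TOXICITY_THRESHOLD) -> list[str]:
    """Return the toxic content type names whose Score meets the threshold,
    but only when the overall Toxicity score itself meets it."""
    if result.get("Toxicity", 0.0) < threshold:
        return []
    return [
        label["Name"]
        for label in result.get("Labels", [])
        if label["Score"] >= threshold
    ]


print(flag_toxic(sample_result))  # only PROFANITY clears the 0.5 cutoff
```

Because both fields are optional (`Required: No`), the sketch uses `dict.get` with defaults so a response missing either field is treated as non-toxic rather than raising a `KeyError`.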
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: