/AWS1/IF_REK=>DETECTMODERATIONLABELS()¶
About DetectModerationLabels¶
Detects unsafe content in a specified JPEG or PNG format image. Use
DetectModerationLabels to moderate images depending on your requirements. For
example, you might want to filter images that contain nudity, but not images containing
suggestive content.
To filter images, use the labels returned by DetectModerationLabels to
determine which types of content are appropriate.
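For example, the following is a minimal filtering sketch. It assumes lo_result already holds a DetectModerationLabels response (see the syntax example later in this topic) and that "Explicit Nudity" is one of the label names your application treats as inappropriate:
DATA(lv_blocked) = abap_false.
LOOP AT lo_result->get_moderationlabels( ) INTO DATA(lo_label).
  IF lo_label->get_name( ) = 'Explicit Nudity'.
    lv_blocked = abap_true.   " image contains a blocked content category
  ENDIF.
ENDLOOP.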
For information about moderation labels, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.
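For example, the following sketch shows both ways of building the io_image argument; the bucket name, object key, and the XSTRING variable lv_image_xstr are illustrative assumptions:
" Reference to an image stored in an Amazon S3 bucket.
DATA(lo_image) = NEW /aws1/cl_rekimage(
  io_s3object = NEW /aws1/cl_reks3object(
    iv_bucket = |amzn-s3-demo-bucket|
    iv_name   = |uploads/photo-01.jpg|
  )
).

" Raw JPEG or PNG bytes as an XSTRING; when calling through the SDK you
" typically do not need to base64-encode the bytes yourself.
DATA(lo_image_bytes) = NEW /aws1/cl_rekimage( iv_bytes = lv_image_xstr ).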
You can specify an adapter to use when retrieving label predictions by providing a
ProjectVersionArn to the ProjectVersion argument.
Method Signature¶
METHODS /AWS1/IF_REK~DETECTMODERATIONLABELS
  IMPORTING
    !IO_IMAGE TYPE REF TO /AWS1/CL_REKIMAGE OPTIONAL
    !IV_MINCONFIDENCE TYPE /AWS1/RT_FLOAT_AS_STRING OPTIONAL
    !IO_HUMANLOOPCONFIG TYPE REF TO /AWS1/CL_REKHUMANLOOPCONFIG OPTIONAL
    !IV_PROJECTVERSION TYPE /AWS1/REKPROJECTVERSIONID OPTIONAL
  RETURNING
    VALUE(OO_OUTPUT) TYPE REF TO /AWS1/CL_REKDETECTMDERATIONL01
  RAISING
    /AWS1/CX_REKACCESSDENIEDEX
    /AWS1/CX_REKHLQUOTAEXCEEDEDEX
    /AWS1/CX_REKIMAGETOOLARGEEX
    /AWS1/CX_REKINTERNALSERVERERR
    /AWS1/CX_REKINVIMAGEFORMATEX
    /AWS1/CX_REKINVALIDPARAMETEREX
    /AWS1/CX_REKINVALIDS3OBJECTEX
    /AWS1/CX_REKPROVTHRUPUTEXCDEX
    /AWS1/CX_REKRESOURCENOTFOUNDEX
    /AWS1/CX_REKRESOURCENOTREADYEX
    /AWS1/CX_REKTHROTTLINGEX
    /AWS1/CX_REKCLIENTEXC
    /AWS1/CX_REKSERVEREXC
    /AWS1/CX_RT_TECHNICAL_GENERIC
    /AWS1/CX_RT_SERVICE_GENERIC.
IMPORTING¶
Required arguments:¶
io_image TYPE REF TO /AWS1/CL_REKIMAGE¶
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition Developer Guide.
Optional arguments:¶
iv_minconfidence TYPE /AWS1/RT_FLOAT_AS_STRING¶
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.
If you don't specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.
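For example, the following sketch raises the threshold to 75 percent; lo_client and lo_image are assumed to come from the earlier sketches, and the float-as-string type lets you pass the value as a string template:
DATA(lo_result) = lo_client->detectmoderationlabels(
  io_image         = lo_image
  iv_minconfidence = |75.0|
).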
io_humanloopconfig TYPE REF TO /AWS1/CL_REKHUMANLOOPCONFIG¶
Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.
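For example, the following sketch routes predictions to a human review workflow; the flow definition ARN and human loop name are placeholders for resources you would create in Amazon Augmented AI (A2I) beforehand:
DATA(lo_result) = lo_client->detectmoderationlabels(
  io_image           = lo_image
  io_humanloopconfig = NEW /aws1/cl_rekhumanloopconfig(
    iv_flowdefinitionarn = |arn:aws:sagemaker:us-east-1:111122223333:flow-definition/moderation-review|
    iv_humanloopname     = |moderation-loop-001|
  )
).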
iv_projectversion TYPE /AWS1/REKPROJECTVERSIONID¶
Identifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
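For example, the following sketch applies a custom adapter; the ProjectVersionArn shown is a placeholder for an adapter version created in your own account:
DATA(lo_result) = lo_client->detectmoderationlabels(
  io_image          = lo_image
  iv_projectversion = |arn:aws:rekognition:us-east-1:111122223333:project/my-adapter/version/my-adapter.2024-01-01T00.00.00/1704067200000|
).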
RETURNING¶
oo_output TYPE REF TO /AWS1/CL_REKDETECTMDERATIONL01¶
Examples¶
Syntax Example¶
This is an example of the syntax for calling the method. It includes every possible argument and initializes every possible value. The data provided is not necessarily semantically accurate (for example, the value "string" may be provided for something that is intended to be an instance ID, or two arguments that are actually mutually exclusive may both be set). The example shows the ABAP syntax for creating the various data structures.
DATA(lo_result) = lo_client->detectmoderationlabels(
  io_humanloopconfig = new /aws1/cl_rekhumanloopconfig(
    io_dataattributes = new /aws1/cl_rekhumanloopdataattrs(
      it_contentclassifiers = VALUE /aws1/cl_rekcontclassifiers_w=>tt_contentclassifiers(
        ( new /aws1/cl_rekcontclassifiers_w( |string| ) )
      )
    )
    iv_flowdefinitionarn = |string|
    iv_humanloopname = |string|
  )
  io_image = new /aws1/cl_rekimage(
    io_s3object = new /aws1/cl_reks3object(
      iv_bucket = |string|
      iv_name = |string|
      iv_version = |string|
    )
    iv_bytes = '5347567362473873563239796247513D'
  )
  iv_minconfidence = |0.1|
  iv_projectversion = |string|
).
This is an example of reading all possible response values.
lo_result = lo_result.
IF lo_result IS NOT INITIAL.
  " Moderation labels detected in the image.
  LOOP AT lo_result->get_moderationlabels( ) into lo_row.
    lo_row_1 = lo_row.
    IF lo_row_1 IS NOT INITIAL.
      lv_percent = lo_row_1->get_confidence( ).
      lv_string = lo_row_1->get_name( ).
      lv_string = lo_row_1->get_parentname( ).
      lv_uinteger = lo_row_1->get_taxonomylevel( ).
    ENDIF.
  ENDLOOP.
  lv_string = lo_result->get_moderationmodelversion( ).
  " Details about the human review (Amazon A2I) loop, if one was activated.
  lo_humanloopactivationoutp = lo_result->get_humanloopactoutput( ).
  IF lo_humanloopactivationoutp IS NOT INITIAL.
    lv_humanlooparn = lo_humanloopactivationoutp->get_humanlooparn( ).
    LOOP AT lo_humanloopactivationoutp->get_humanloopactreasons( ) into lo_row_2.
      lo_row_3 = lo_row_2.
      IF lo_row_3 IS NOT INITIAL.
        lv_humanloopactivationreas = lo_row_3->get_value( ).
      ENDIF.
    ENDLOOP.
    lv_synthesizedjsonhumanloo = lo_humanloopactivationoutp->get_hlactcondsevalresults( ).
  ENDIF.
  lv_projectversionid = lo_result->get_projectversion( ).
  " Predicted content types for the image.
  LOOP AT lo_result->get_contenttypes( ) into lo_row_4.
    lo_row_5 = lo_row_4.
    IF lo_row_5 IS NOT INITIAL.
      lv_percent = lo_row_5->get_confidence( ).
      lv_string = lo_row_5->get_name( ).
    ENDIF.
  ENDLOOP.
ENDIF.