/AWS1/CL_SGM=>CREATEINFERENCECOMPONENT()

About CreateInferenceComponent
Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
Method Signature
IMPORTING
Required arguments:
iv_inferencecomponentname
TYPE /AWS1/SGMINFERENCECOMPONENTN00

A unique name to assign to the inference component.
iv_endpointname
TYPE /AWS1/SGMENDPOINTNAME

The name of an existing endpoint where you host the inference component.
io_specification
TYPE REF TO /AWS1/CL_SGMINFERENCECOMPONE00

Details about the resources to deploy with this inference component, including the model, container, and compute resources.
Optional arguments:
iv_variantname
TYPE /AWS1/SGMVARIANTNAME

The name of an existing production variant where you host the inference component.
io_runtimeconfig
TYPE REF TO /AWS1/CL_SGMINFERENCECOMPONE04

Runtime settings for a model that is deployed with an inference component.
it_tags
TYPE /AWS1/CL_SGMTAG=>TT_TAGLIST

A list of key-value pairs associated with the model. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference.
RETURNING
oo_output
TYPE REF TO /AWS1/CL_SGMCREINFERENCECOMP01
Examples
Syntax Example
This is an example of the syntax for calling the method. It includes every possible argument and initializes every possible value. The data provided is not necessarily semantically accurate (for example, the value "string" may be provided for something that is intended to be an instance ID, or two arguments may be mutually exclusive). The example shows the ABAP syntax for creating the various data structures.
DATA(lo_result) = lo_client->/aws1/if_sgm~createinferencecomponent(
  io_runtimeconfig = NEW /aws1/cl_sgminferencecompone04( 123 )
  io_specification = NEW /aws1/cl_sgminferencecompone00(
    io_computeresrcrequirements = NEW /aws1/cl_sgminferencecompone03(
      iv_maxmemoryrequiredinmb = 123
      iv_minmemoryrequiredinmb = 123
      iv_numberofcpucoresrequired = '0.1'
      iv_numofacceleratordevsreq00 = '0.1'
    )
    io_container = NEW /aws1/cl_sgminferencecompone01(
      it_environment = VALUE /aws1/cl_sgmenvironmentmap_w=>tt_environmentmap(
        ( VALUE /aws1/cl_sgmenvironmentmap_w=>ts_environmentmap_maprow(
            key = |string|
            value = NEW /aws1/cl_sgmenvironmentmap_w( |string| )
        ) )
      )
      iv_artifacturl = |string|
      iv_image = |string|
    )
    io_startupparameters = NEW /aws1/cl_sgminferencecompone02(
      iv_containerstrtuphealthch00 = 123
      iv_mdeldatadownloadtmoutin00 = 123
    )
    iv_baseinferencecomponentn00 = |string|
    iv_modelname = |string|
  )
  it_tags = VALUE /aws1/cl_sgmtag=>tt_taglist(
    ( NEW /aws1/cl_sgmtag(
        iv_key = |string|
        iv_value = |string|
    ) )
  )
  iv_endpointname = |string|
  iv_inferencecomponentname = |string|
  iv_variantname = |string|
).
This is an example of reading all possible response values.

IF lo_result IS NOT INITIAL.
  DATA(lv_inferencecomponentarn) = lo_result->get_inferencecomponentarn( ).
ENDIF.
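For orientation, a more realistic call is sketched below. It deploys one existing model to an existing endpoint with a fixed CPU and memory reservation and a single copy. All names here (my-endpoint, AllTraffic, my-model, my-inference-component) are hypothetical placeholders, not values from this reference, and the catch block uses the SDK's generic service exception as an assumption about the error-handling convention:

```abap
" Sketch under stated assumptions: the endpoint, its production variant,
" and the model must already exist in your account; all names are
" hypothetical placeholders.
TRY.
    DATA(lo_ic) = lo_client->/aws1/if_sgm~createinferencecomponent(
      iv_inferencecomponentname = |my-inference-component|
      iv_endpointname           = |my-endpoint|
      iv_variantname            = |AllTraffic|
      io_specification          = NEW /aws1/cl_sgminferencecompone00(
        iv_modelname                = |my-model|
        io_computeresrcrequirements = NEW /aws1/cl_sgminferencecompone03(
          iv_numberofcpucoresrequired = '1.0'
          iv_minmemoryrequiredinmb    = 1024
        )
      )
      " Runtime config: number of copies of the model to run (here, one).
      io_runtimeconfig          = NEW /aws1/cl_sgminferencecompone04( 1 )
    ).
    " The ARN identifies the new inference component for later calls.
    DATA(lv_arn) = lo_ic->get_inferencecomponentarn( ).
  CATCH /aws1/cx_rt_service_generic INTO DATA(lo_exception).
    " Handle service errors, for example an endpoint that does not exist.
ENDTRY.
```

Creation is asynchronous: the component is not ready to serve traffic until it leaves the Creating state, so poll its status (for example via DescribeInferenceComponent) before invoking the model.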