Use an inference profile in model invocation
You can use a cross-Region inference profile in place of a foundation model to route requests to multiple Regions. To track costs and usage for a model in one or more Regions, you can use an application inference profile. To learn how to use an inference profile when running model inference, choose the tab for your preferred method and follow the steps:
- Console

  To use an inference profile with a feature that supports it, do the following:

  1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock.

  2. Navigate to the page for the feature that you want to use an inference profile for. For example, select Chat / Text playground from the left navigation pane.

  3. Choose Select model and then choose the model. For example, choose Amazon and then Nova Premier.

  4. Under Inference, select Inference profiles from the dropdown menu.

  5. Select the inference profile to use (for example, US Nova Premier) and then choose Apply.
- API

  You can use an inference profile when running inference from any Region that it includes, with the following API operations:

  - InvokeModel or InvokeModelWithResponseStream – To use an inference profile in model invocation, follow the steps at Submit a single prompt with InvokeModel and specify the Amazon Resource Name (ARN) of the inference profile in the `modelId` field. For an example, see Use an inference profile in model invocation.

  - Converse or ConverseStream – To use an inference profile in model invocation with the Converse API, follow the steps at Carry out a conversation with the Converse API operations and specify the ARN of the inference profile in the `modelId` field. For an example, see Use an inference profile in a conversation.

  - RetrieveAndGenerate – To use an inference profile when generating responses from the results of querying a knowledge base, follow the steps in the API tab in Test your knowledge base with queries and responses and specify the ARN of the inference profile in the `modelArn` field. For more information, see Use an inference profile to generate a response.

  - CreateEvaluationJob – To submit an inference profile for model evaluation, follow the steps in the API tab in Starting an automatic model evaluation job in Amazon Bedrock and specify the ARN of the inference profile in the `modelIdentifier` field.

  - CreatePrompt – To use an inference profile when generating a response for a prompt you create in Prompt management, follow the steps in the API tab in Create a prompt using Prompt management and specify the ARN of the inference profile in the `modelId` field.

  - CreateFlow – To use an inference profile when generating a response for an inline prompt that you define within a prompt node in a flow, follow the steps in the API tab in Create and design a flow in Amazon Bedrock. When defining the prompt node, specify the ARN of the inference profile in the `modelId` field.

  - CreateDataSource – To use an inference profile when parsing non-textual information in a data source, follow the steps in the API section in Parsing options for your data source and specify the ARN of the inference profile in the `modelArn` field.
  Note: If you're using a cross-Region (system-defined) inference profile, you can use either the ARN or the ID of the inference profile.
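As a sketch of the InvokeModel case, the snippet below builds a request for the boto3 `bedrock-runtime` client. The inference profile ARN, account ID, and Amazon Nova request body are placeholder assumptions, not values from this page:

```python
import json

# Hypothetical cross-Region inference profile ARN (account ID is a placeholder).
# For a system-defined profile, the profile ID alone would also work,
# e.g. "us.amazon.nova-premier-v1:0".
profile_arn = (
    "arn:aws:bedrock:us-east-1:111122223333:"
    "inference-profile/us.amazon.nova-premier-v1:0"
)

# The profile ARN goes in the modelId field, exactly where a foundation
# model ID would otherwise go; the body is a model-specific payload
# (here, an assumed Amazon Nova messages format).
request = {
    "modelId": profile_arn,
    "body": json.dumps({
        "messages": [{"role": "user", "content": [{"text": "Hello!"}]}],
        "inferenceConfig": {"maxTokens": 256},
    }),
}

# With AWS credentials configured, the call itself would be:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**request)
print(request["modelId"])
```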
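A comparable sketch for Converse, again with boto3 and a placeholder profile ARN. Converse takes structured messages directly, so no serialized JSON body is needed:

```python
# Hypothetical inference profile ARN (account ID is a placeholder).
profile_arn = (
    "arn:aws:bedrock:us-east-1:111122223333:"
    "inference-profile/us.amazon.nova-premier-v1:0"
)

# Converse accepts the profile ARN in modelId and structured messages
# directly, so the same request shape works for any supported model.
params = {
    "modelId": profile_arn,
    "messages": [{"role": "user", "content": [{"text": "Hello!"}]}],
    "inferenceConfig": {"maxTokens": 256},
}

# With AWS credentials configured:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**params)
#   print(response["output"]["message"]["content"][0]["text"])
print(params["modelId"])
```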
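For RetrieveAndGenerate, the ARN goes in the `modelArn` field of the knowledge base configuration instead. The knowledge base ID and account ID below are made-up placeholders:

```python
# Hypothetical inference profile ARN (account ID is a placeholder).
profile_arn = (
    "arn:aws:bedrock:us-east-1:111122223333:"
    "inference-profile/us.amazon.nova-premier-v1:0"
)

# Note the field name: modelArn, not modelId, for this operation.
params = {
    "input": {"text": "Summarize the onboarding guide."},
    "retrieveAndGenerateConfiguration": {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234ABCD",  # placeholder knowledge base ID
            "modelArn": profile_arn,
        },
    },
}

# With AWS credentials configured (this operation uses the
# bedrock-agent-runtime client, not bedrock-runtime):
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(**params)
kb_config = params["retrieveAndGenerateConfiguration"]["knowledgeBaseConfiguration"]
print(kb_config["modelArn"])
```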