Deploys a custom model for on-demand inference in Amazon Bedrock. After you deploy
your custom model, you use the deployment's Amazon Resource Name (ARN) as the modelId
parameter when you submit prompts and generate responses with model inference.
For more information about setting up on-demand inference for custom models, see Set up inference for a custom model.
The following actions are related to the CreateCustomModelDeployment
operation: GetCustomModelDeployment, ListCustomModelDeployments, and DeleteCustomModelDeployment.
For .NET Core, this operation is available only in asynchronous form. Please refer to CreateCustomModelDeploymentAsync.
Namespace: Amazon.Bedrock
Assembly: AWSSDK.Bedrock.dll
Version: 3.x.y.z
public virtual CreateCustomModelDeploymentResponse CreateCustomModelDeployment(
      CreateCustomModelDeploymentRequest request
)

Parameters:
request — Container for the necessary parameters to execute the CreateCustomModelDeployment service method.
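The flow described above can be sketched as follows: deploy the custom model, then keep the returned deployment ARN to use as the modelId for inference. This is a minimal sketch, not a definitive implementation — the property names (ModelDeploymentName, ModelArn) and the response member (CustomModelDeploymentArn) are assumed from the documented request/response shapes and should be checked against your AWSSDK.Bedrock version; the model ARN and deployment name are placeholders.

```csharp
using System;
using Amazon.Bedrock;
using Amazon.Bedrock.Model;

// Sketch: deploy a custom model for on-demand inference.
// Property and response member names are assumptions based on the
// documented request/response types; verify against your SDK version.
var client = new AmazonBedrockClient();

var request = new CreateCustomModelDeploymentRequest
{
    ModelDeploymentName = "my-deployment",  // hypothetical name
    ModelArn = "arn:aws:bedrock:us-east-1:111122223333:custom-model/placeholder"
};

var response = await client.CreateCustomModelDeploymentAsync(request);

// Use the deployment's ARN as the modelId parameter when you submit
// prompts and generate responses with model inference
// (for example, through Amazon.BedrockRuntime).
string deploymentArn = response.CustomModelDeploymentArn;
Console.WriteLine(deploymentArn);
```

The async form is shown because, as noted above, it is the only form available on .NET Core.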
| Exception | Condition |
| --- | --- |
| AccessDeniedException | The request is denied because of missing access permissions. |
| InternalServerException | An internal server error occurred. Retry your request. |
| ResourceNotFoundException | The specified resource Amazon Resource Name (ARN) was not found. Check the ARN and try your request again. |
| ServiceQuotaExceededException | The number of requests exceeds the service quota. Resubmit your request later. |
| ThrottlingException | The number of requests exceeds the limit. Resubmit your request later. |
| TooManyTagsException | The request contains more tags than can be associated with a resource (50 tags per resource). The maximum number of tags includes both existing tags and those included in your current request. |
| ValidationException | Input validation failed. Check your request parameters and retry the request. |
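Several of the exceptions in the table are transient (ThrottlingException, ServiceQuotaExceededException, InternalServerException) and warrant resubmitting the request, while others (ValidationException, AccessDeniedException) will fail again unchanged. A minimal retry sketch, assuming the exception types live in Amazon.Bedrock.Model as is conventional for this SDK:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.Bedrock;
using Amazon.Bedrock.Model;

// Sketch: retry transient failures with exponential backoff,
// and let non-retryable exceptions propagate to the caller.
// Exception type locations are assumed from the usual
// AWSSDK.Bedrock.Model convention.
async Task<CreateCustomModelDeploymentResponse> DeployWithRetryAsync(
    IAmazonBedrock client,
    CreateCustomModelDeploymentRequest request,
    int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await client.CreateCustomModelDeploymentAsync(request);
        }
        catch (ThrottlingException) when (attempt < maxAttempts)
        {
            // "Resubmit your request later": back off before retrying.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
        // ValidationException, AccessDeniedException, etc. are not caught:
        // retrying them without changing the request will not help.
    }
}
```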
.NET Framework:
Supported in: 3.5, 4.5 and newer