Create a custom model (AWS SDK)

To create a custom model from a SageMaker AI-trained Amazon Nova model that is stored in Amazon S3, use the CreateCustomModel API operation. You can use the following code to create a custom model with the SDK for Python (Boto3). The code creates the custom model and then checks its status until the model is ACTIVE and ready to use.

To use the code, update the following parameters. The example code also includes optional parameters, such as clientRequestToken for idempotency and modelTags for resource tagging. A minimal sketch of the request follows the list.

  • modelName – Specify a unique name for the model.

  • s3Uri – Specify the path to the Amazon managed Amazon S3 bucket that stores your model artifacts. SageMaker AI creates this bucket when you run your first SageMaker AI training job.

  • roleArn – Specify the Amazon Resource Name (ARN) of the IAM service role that Amazon Bedrock assumes to perform tasks on your behalf. For more information about creating this role, see Create a service role for importing pre-trained models.

  • modelKmsKeyArn (optional) – Specify the AWS KMS key used to encrypt the model in Amazon Bedrock. If you don't provide an AWS KMS key, Amazon Bedrock encrypts the model with an AWS managed AWS KMS key. For information about encryption, see Encryption of imported custom models.
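
To show how these parameters map onto the request, here is a minimal sketch of the CreateCustomModel call on its own. The model name, Region, role ARN, bucket path, and KMS key ARN are placeholder values that you must replace with your own; the optional idempotency token and tags from the full example below are omitted.

import boto3

bedrock = boto3.client(service_name='bedrock', region_name='REGION')

# All values below are placeholders -- substitute your own model name,
# service role ARN, Amazon S3 path, and (optionally) AWS KMS key ARN.
response = bedrock.create_custom_model(
    modelName='my-nova-custom-model',
    roleArn='arn:aws:iam::111122223333:role/MyBedrockModelImportRole',
    modelSourceConfig={
        's3DataSource': {
            's3Uri': 's3://amzn-s3-demo-bucket/folder/'
        }
    },
    modelKmsKeyArn='arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab'  # optional
)
print(f"Model ARN: {response['modelArn']}")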

After you create the custom model, it appears in the ListCustomModels response with a customizationType of imported. To track the status of the new model, you can use the GetCustomModel API operation.
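
For a one-off check without a polling loop, assuming you already have the model ARN returned by CreateCustomModel, a single GetCustomModel call returns the current status; the full example below wraps the same call in a loop and uses ListCustomModels to show the customizationType. The ARN and Region here are placeholders.

import boto3

bedrock = boto3.client(service_name='bedrock', region_name='REGION')

# Placeholder ARN -- use the modelArn value returned by CreateCustomModel.
model_arn = 'arn:aws:bedrock:us-east-1:111122223333:custom-model/my-nova-custom-model'

response = bedrock.get_custom_model(modelIdentifier=model_arn)
print(f"Status: {response['modelStatus']}")  # for example, Creating, Active, or Failed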

import boto3
import uuid
from botocore.exceptions import ClientError
import time


def create_custom_model(bedrock_client):
    """
    Creates a custom model in Amazon Bedrock from a SageMaker AI-trained
    Amazon Nova model stored in Amazon S3.

    Args:
        bedrock_client: The Amazon Bedrock client instance

    Returns:
        dict: Response from the CreateCustomModel API call
    """
    try:
        # Create a unique client request token for idempotency
        client_request_token = str(uuid.uuid4())

        # Define the model source configuration
        model_source_config = {
            's3DataSource': {
                's3Uri': 's3://amzn-s3-demo-bucket/folder/',
            }
        }

        # Create the custom model
        response = bedrock_client.create_custom_model(
            # Required parameters
            modelName='modelName',
            roleArn='serviceRoleArn',
            modelSourceConfig=model_source_config,

            # Optional parameters
            clientRequestToken=client_request_token,
            modelKmsKeyArn='keyArn',
            modelTags=[
                {
                    'key': 'Environment',
                    'value': 'Production'
                },
                {
                    'key': 'Project',
                    'value': 'AIInference'
                }
            ]
        )
        print(f"Custom model creation initiated. Model ARN: {response['modelArn']}")
        return response

    except ClientError as e:
        print(f"Error creating custom model: {e}")
        raise


def list_custom_models(bedrock_client):
    """
    Lists all custom models in Amazon Bedrock.

    Args:
        bedrock_client: An Amazon Bedrock client.

    Returns:
        dict: Response from the ListCustomModels API call
    """
    try:
        response = bedrock_client.list_custom_models()
        print(f"Total number of custom models: {len(response['modelSummaries'])}")

        for model in response['modelSummaries']:
            print("ARN: " + model['modelArn'])
            print("Name: " + model['modelName'])
            print("Status: " + model['modelStatus'])
            print("Customization type: " + model['customizationType'])
            print("------------------------------------------------------")

        return response

    except ClientError as e:
        print(f"Error listing custom models: {e}")
        raise


def check_model_status(bedrock_client, model_arn):
    """
    Checks the status of a custom model creation.

    Args:
        model_arn (str): The ARN of the custom model
        bedrock_client: An Amazon Bedrock client.

    Returns:
        dict: Response from the GetCustomModel API call
    """
    try:
        max_time = time.time() + 60 * 60  # 1 hour
        while time.time() < max_time:
            response = bedrock_client.get_custom_model(modelIdentifier=model_arn)
            status = response.get('modelStatus')
            print(f"Job status: {status}")

            if status == 'Failed':
                print(f"Failure reason: {response.get('failureMessage')}")
                break
            if status == 'Active':
                print("Model is ready for use.")
                break

            time.sleep(60)

    except ClientError as e:
        print(f"Error checking model status: {e}")
        raise


def main():
    bedrock_client = boto3.client(service_name='bedrock', region_name='REGION')

    # Create the custom model
    model_arn = create_custom_model(bedrock_client)["modelArn"]

    # Check the status of the model
    if model_arn:
        check_model_status(bedrock_client, model_arn)

    # View all custom models
    list_custom_models(bedrock_client)


if __name__ == "__main__":
    main()
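
A note on the design of this example: the clientRequestToken generated with uuid.uuid4() makes the CreateCustomModel call idempotent, so retrying the same request does not create a duplicate model; the polling loop in check_model_status checks the status every 60 seconds and gives up after one hour, and you can adjust both intervals to suit your workload; and API errors surface as botocore ClientError exceptions, which the functions log and re-raise so the caller can decide how to handle them.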