Create a custom model (AWS SDKs)
To create a custom model from a SageMaker AI-trained Amazon Nova model stored in Amazon S3, use the CreateCustomModel API operation. You can use the following code to create the custom model with the SDK for Python (Boto3). The code creates the custom model and then checks its status until the model is ACTIVE and ready to use.
To use the code, update the following parameters. The code example also includes the optional clientRequestToken parameter, which is used for idempotency, and the modelTags parameter, which is used for resource tagging.
- modelName – Specify a unique name for the model.
- s3Uri – Specify the path of the SageMaker AI-managed Amazon S3 bucket that stores the model artifacts. SageMaker AI creates this bucket when you run your first SageMaker AI training job.
- roleArn – Specify the Amazon Resource Name (ARN) of the IAM service role that Amazon Bedrock assumes to perform tasks on your behalf. For more information about creating this role, see Create a service role for importing pre-trained models. A minimal Boto3 sketch of setting up such a role follows this list.
-
modelKmsKeyArn(可选)— 在 Amazon Bedrock 中指定用于加密模型的 AWS KMS 密钥。如果您不提供 AWS KMS 密钥,Amazon Bedrock 会使用 AWS托管密 AWS KMS 钥对模型进行加密。有关加密的信息,请参见对导入的自定义模型进行加密。
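The roleArn parameter above refers to a service role that Amazon Bedrock assumes on your behalf. The following is only a minimal sketch of creating such a role with the SDK for Python (Boto3); the role name (MyBedrockModelImportRole) and the policy scope are assumptions made for illustration, and the exact trust and permission policies your account needs are described in the linked service-role topic.

import json
import boto3

iam_client = boto3.client('iam')

# Trust policy that lets the Amazon Bedrock service assume the role.
# Assumption: additional condition keys (for example aws:SourceAccount)
# may be required; see the service-role documentation.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "bedrock.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }
    ]
}

# Inline policy that grants read access to the bucket holding the model
# artifacts. The bucket name is a placeholder.
s3_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ]
        }
    ]
}

# Hypothetical role name, used for illustration only.
role = iam_client.create_role(
    RoleName='MyBedrockModelImportRole',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

iam_client.put_role_policy(
    RoleName='MyBedrockModelImportRole',
    PolicyName='BedrockModelImportS3Access',
    PolicyDocument=json.dumps(s3_read_policy)
)

# Pass this ARN as the roleArn value in create_custom_model.
print(role['Role']['Arn'])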
After you create the custom model, the model appears in the ListCustomModels response with a customizationType of imported. To track the status of the new model, you can use the GetCustomModel API operation. A short sketch after the full example below shows one way to filter the list for imported models.
import boto3
import uuid
from botocore.exceptions import ClientError
import time

def create_custom_model(bedrock_client):
    """
    Creates a custom model in Amazon Bedrock from a SageMaker AI-trained
    Amazon Nova model stored in Amazon S3.

    Args:
        bedrock_client: The Amazon Bedrock client instance

    Returns:
        dict: Response from the CreateCustomModel API call
    """
    try:
        # Create a unique client request token for idempotency
        client_request_token = str(uuid.uuid4())

        # Define the model source configuration
        model_source_config = {
            's3DataSource': {
                's3Uri': 's3://amzn-s3-demo-bucket/folder/',
            }
        }

        # Create the custom model
        response = bedrock_client.create_custom_model(
            # Required parameters
            modelName='modelName',
            roleArn='serviceRoleArn',
            modelSourceConfig=model_source_config,

            # Optional parameters
            clientRequestToken=client_request_token,
            modelKmsKeyArn='keyArn',
            modelTags=[
                {
                    'key': 'Environment',
                    'value': 'Production'
                },
                {
                    'key': 'Project',
                    'value': 'AIInference'
                }
            ]
        )
        print(f"Custom model creation initiated. Model ARN: {response['modelArn']}")
        return response

    except ClientError as e:
        print(f"Error creating custom model: {e}")
        raise

def list_custom_models(bedrock_client):
    """
    Lists all custom models in Amazon Bedrock.

    Args:
        bedrock_client: An Amazon Bedrock client.

    Returns:
        dict: Response from the ListCustomModels API call
    """
    try:
        response = bedrock_client.list_custom_models()
        print(f"Total number of custom models: {len(response['modelSummaries'])}")

        for model in response['modelSummaries']:
            print("ARN: " + model['modelArn'])
            print("Name: " + model['modelName'])
            print("Status: " + model['modelStatus'])
            print("Customization type: " + model['customizationType'])
            print("------------------------------------------------------")

        return response

    except ClientError as e:
        print(f"Error listing custom models: {e}")
        raise

def check_model_status(bedrock_client, model_arn):
    """
    Checks the status of a custom model creation.

    Args:
        model_arn (str): The ARN of the custom model
        bedrock_client: An Amazon Bedrock client.

    Returns:
        dict: Response from the GetCustomModel API call
    """
    try:
        max_time = time.time() + 60 * 60  # 1 hour
        while time.time() < max_time:
            response = bedrock_client.get_custom_model(modelIdentifier=model_arn)
            status = response.get('modelStatus')
            print(f"Job status: {status}")

            if status == 'Failed':
                print(f"Failure reason: {response.get('failureMessage')}")
                break

            if status == 'Active':
                print("Model is ready for use.")
                break

            time.sleep(60)

    except ClientError as e:
        print(f"Error checking model status: {e}")
        raise

def main():
    bedrock_client = boto3.client(service_name='bedrock', region_name='REGION')

    # Create the custom model
    model_arn = create_custom_model(bedrock_client)["modelArn"]

    # Check the status of the model
    if model_arn:
        check_model_status(bedrock_client, model_arn)

    # View all custom models
    list_custom_models(bedrock_client)

if __name__ == "__main__":
    main()
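As noted before the example, imported models show up in the ListCustomModels response with a customizationType of imported. The snippet below is a small follow-up sketch, not part of the original example, that reuses the same client configuration and filters the response client-side for imported models.

import boto3

bedrock_client = boto3.client(service_name='bedrock', region_name='REGION')

# List all custom models and keep only those created by model import,
# that is, entries whose customizationType is 'imported'.
response = bedrock_client.list_custom_models()
imported_models = [
    model for model in response['modelSummaries']
    if model.get('customizationType') == 'imported'
]

for model in imported_models:
    print(f"{model['modelName']} ({model['modelStatus']}): {model['modelArn']}")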