

# Deploy large models for inference with TorchServe
<a name="large-model-inference-tutorials-torchserve"></a>

This tutorial demonstrates how to deploy large models and serve inference with TorchServe on GPUs in Amazon SageMaker AI. This example deploys the [OPT-30b](https://huggingface.co/facebook/opt-30b) model to an `ml.g5` instance. You can modify this to work with other models and instance types. Replace the `italicized placeholder text` in the examples with your own information.

TorchServe is a powerful open platform for large distributed model inference. By supporting popular libraries such as PyTorch native PiPPy, DeepSpeed, and HuggingFace Accelerate, it offers uniform handler APIs that stay consistent across distributed large model and non-distributed model inference scenarios. For more information, see [TorchServe's large model inference documentation](https://pytorch.org/serve/large_model_inference.html#).

## Deep Learning Containers with TorchServe
<a name="large-model-inference-tutorials-torchserve-dlcs"></a>

To deploy large models with TorchServe on SageMaker AI, you can use one of the SageMaker AI Deep Learning Containers (DLCs). TorchServe is installed by default on all AWS PyTorch DLCs. During model loading, TorchServe can install specialized libraries tailored for large models, such as PiPPy, DeepSpeed, and Accelerate.
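
For example, per-model dependencies can be declared in a `requirements.txt` file packaged with the model artifacts (passed later to `torch-model-archiver` with `-r requirements.txt`). TorchServe installs them at model load time when the `TS_INSTALL_PY_DEP_PER_MODEL` environment variable is set to `"true"`, as in the deployment example later in this topic. The package list below is illustrative only:

```
# Illustrative per-model dependencies; pin the versions your model needs
transformers
deepspeed
accelerate
```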

The following table lists all of the [SageMaker AI DLCs with TorchServe](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#sagemaker-framework-containers-sm-support-only).


| DLC category | Framework | Hardware | Example URL | 
| --- | --- | --- | --- | 
| [SageMaker AI Framework Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#sagemaker-framework-containers-sm-support-only) | PyTorch 2.0.0, 2.0.1 | CPU, GPU | 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:2.0.1-gpu-py310-cu118-ubuntu20.04-sagemaker | 
| [SageMaker AI Framework Graviton Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#sagemaker-framework-graviton-containers-sm-support-only) | PyTorch 2.0.0, 2.0.1 | CPU | 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-graviton:2.0.1-cpu-py310-ubuntu20.04-sagemaker | 
| [StabilityAI Inference Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#stabilityai-inference-containers) | PyTorch 2.0.1 | GPU | 763104351884.dkr.ecr.us-east-1.amazonaws.com/stabilityai-pytorch-inference:2.0.1-sgm0.1.0-gpu-py310-cu118-ubuntu20.04-sagemaker | 
| [Neuron Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#neuron-containers) | PyTorch 1.13.1 | Neuronx | 763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-inference-neuron:1.13.1-neuron-py310-sdk2.12.0-ubuntu20.04 | 
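
With the SageMaker Python SDK, you can reference one of these images when you create a model. The following is a sketch that builds the `container` variable used by the deployment example later in this topic; the registry account and image tag are copied from the table above, so confirm that they are valid for your Region before use.

```
# Sketch: image URI for the PyTorch 2.0.1 GPU inference DLC from the table above.
# The registry account (763104351884) and tag must be confirmed for your Region.
region = "us-east-1"
container = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/"
    "pytorch-inference:2.0.1-gpu-py310-cu118-ubuntu20.04-sagemaker"
)
```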

## Getting started
<a name="large-model-inference-tutorials-torchserve-getting-started"></a>

Before deploying your model, make sure that you meet the prerequisites. You can also configure your model parameters and customize the handler code.

### Prerequisites
<a name="large-model-inference-tutorials-torchserve-getting-started-prereqs"></a>

To get started, ensure that you have the following prerequisites:

1. Ensure that you have access to an AWS account. [Set up your environment](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) so that the AWS CLI can access your account through either an AWS IAM user or an IAM role. We recommend using an IAM role. For the purposes of testing in your personal account, you can attach the following managed permissions policies to the IAM role:
   + [AmazonEC2ContainerRegistryFullAccess](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess)
   + [AmazonEC2FullAccess](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AmazonEC2FullAccess)
   + [AWSServiceRoleForAmazonEKSNodegroup](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AWSServiceRoleForAmazonEKSNodegroup)
   + [AmazonSageMakerFullAccess](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AmazonSageMakerFullAccess)
   + [AmazonS3FullAccess](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/AmazonS3FullAccess)

   For more information about attaching IAM policies to an IAM identity, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *AWS IAM User Guide*. The example below sketches one way to attach a policy using the AWS CLI.
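
   ```
   # Sketch: attach a managed policy to a role with the AWS CLI.
   # "MySageMakerRole" is a hypothetical role name; substitute your own.
   aws iam attach-role-policy \
       --role-name MySageMakerRole \
       --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
   ```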

1. Locally configure your dependencies, as shown in the following examples:

   1. Install version 2 of the AWS CLI:

      ```
      # Install the latest AWS CLI v2 if it is not installed
      !curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      !unzip awscliv2.zip
      #Follow the instructions to install v2 on the terminal
      !cat aws/README.md
      ```

   1. Install the SageMaker AI and Boto3 clients:

      ```
      # If already installed, update your client
      #%pip install sagemaker pip --upgrade --quiet
      !pip install -U sagemaker
      !pip install -U boto
      !pip install -U botocore
      !pip install -U boto3
      ```

### Configure model settings and parameters
<a name="large-model-inference-tutorials-torchserve-getting-started-config"></a>

TorchServe uses [torchrun](https://pytorch.org/docs/stable/elastic/run.html) to set up the distributed environment for model parallel processing. TorchServe can support multiple workers for a large model. By default, TorchServe uses a round-robin algorithm to assign GPUs to the workers on a host. In the case of large model inference, the number of GPUs assigned to each worker is automatically calculated based on the number of GPUs specified in the `model_config.yaml` file. The environment variable `CUDA_VISIBLE_DEVICES`, which specifies the GPU device IDs that are visible at a given time, is set based on this number.

For example, suppose there are 8 GPUs on a node and one worker needs 4 GPUs (`nproc_per_node=4`). In this case, TorchServe assigns four GPUs to the first worker (`CUDA_VISIBLE_DEVICES="0,1,2,3"`) and four GPUs to the second worker (`CUDA_VISIBLE_DEVICES="4,5,6,7"`).

In addition to this default behavior, TorchServe gives users the flexibility to specify GPUs for a worker. For instance, if you set the variable [`deviceIds: [2,3,4,5]` in the model config YAML file](https://github.com/pytorch/serve/blob/5ee02e4f050c9b349025d87405b246e970ee710b/model-archiver/README.md?plain=1#L164) and set `nproc_per_node=2`, then TorchServe assigns `CUDA_VISIBLE_DEVICES="2,3"` to the first worker and `CUDA_VISIBLE_DEVICES="4,5"` to the second worker.
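
Expressed in the model config YAML, the relevant portion of that scenario is a short sketch:

```
deviceIds: [2,3,4,5]    # GPUs available to this model's workers
torchrun:
    nproc-per-node: 2   # each worker consumes two of the listed GPUs
```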

In the following `model_config.yaml` example, we configure both the front-end and back-end parameters for the [OPT-30b](https://huggingface.co/facebook/opt-30b) model. The configured front-end parameters are `parallelType`, `deviceType`, `deviceIds`, and `torchrun`. For more details about the front-end parameters you can configure, see the [PyTorch GitHub documentation](https://github.com/pytorch/serve/blob/2bf505bae3046b0f7d0900727ec36e611bb5dca3/docs/configuration.md?plain=1#L267). The back-end configuration is based on a YAML map that allows for free-style customization. For the back-end parameters, we define the DeepSpeed configuration and additional parameters used by the custom handler code.

```
# TorchServe front-end parameters
minWorkers: 1
maxWorkers: 1
maxBatchDelay: 100
responseTimeout: 1200
parallelType: "tp"
deviceType: "gpu"
# example of user specified GPU deviceIds
deviceIds: [0,1,2,3] # sets CUDA_VISIBLE_DEVICES

torchrun:
    nproc-per-node: 4

# TorchServe back-end parameters
deepspeed:
    config: ds-config.json
    checkpoint: checkpoints.json

handler: # parameters for custom handler code
    model_name: "facebook/opt-30b"
    model_path: "model/models--facebook--opt-30b/snapshots/ceea0a90ac0f6fae7c2c34bcb40477438c152546"
    max_length: 50
    max_new_tokens: 10
    manual_seed: 40
```
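
The `ds-config.json` file referenced in the back-end parameters holds the DeepSpeed inference configuration. A minimal sketch might look like the following; the values are illustrative (the `tp_size` of 4 matches `nproc-per-node` above), and the full set of options is described in the DeepSpeed inference documentation.

```
{
    "dtype": "torch.float16",
    "replace_with_kernel_inject": true,
    "tensor_parallel": {
        "tp_size": 4
    }
}
```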

### Custom handlers
<a name="large-model-inference-tutorials-torchserve-getting-started-handlers"></a>

TorchServe provides [base handlers](https://github.com/pytorch/serve/tree/master/ts/torch_handler/distributed) and [handler utilities](https://github.com/pytorch/serve/tree/master/ts/handler_utils) for large model inference built with popular libraries. The following example demonstrates how the custom handler class [TransformersSeqClassifierHandler](https://github.com/pytorch/serve/blob/ab69b69a59d6ca6074df7e6d4014f07eb48dedba/examples/large_models/deepspeed/custom_handler.py#L16C7-L16C39) extends [BaseDeepSpeedHandler](https://github.com/pytorch/serve/blob/ab69b69a59d6ca6074df7e6d4014f07eb48dedba/ts/torch_handler/distributed/base_deepspeed_handler.py#L8) and uses the [handler utilities](https://github.com/pytorch/serve/blob/master/ts/handler_utils/distributed/deepspeed.py). For a full code example, see the [`custom_handler.py` code in the PyTorch GitHub documentation](https://github.com/pytorch/serve/blob/master/examples/large_models/deepspeed/custom_handler.py).

```
# Imports needed by this snippet (see the linked custom_handler.py)
from abc import ABC
import logging

import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

from ts.context import Context
from ts.handler_utils.distributed.deepspeed import get_ds_engine
from ts.torch_handler.distributed.base_deepspeed_handler import BaseDeepSpeedHandler

logger = logging.getLogger(__name__)


class TransformersSeqClassifierHandler(BaseDeepSpeedHandler, ABC):
    """
    Transformers handler class for sequence, token classification and question answering.
    """

    def __init__(self):
        super(TransformersSeqClassifierHandler, self).__init__()
        self.max_length = None
        self.max_new_tokens = None
        self.tokenizer = None
        self.initialized = False

    def initialize(self, ctx: Context):
        """In this initialize function, the HF large model is loaded and
        partitioned using DeepSpeed.
        Args:
            ctx (context): It is a JSON Object containing information
            pertaining to the model artifacts parameters.
        """
        super().initialize(ctx)
        model_dir = ctx.system_properties.get("model_dir")
        self.max_length = int(ctx.model_yaml_config["handler"]["max_length"])
        self.max_new_tokens = int(ctx.model_yaml_config["handler"]["max_new_tokens"])
        model_name = ctx.model_yaml_config["handler"]["model_name"]
        model_path = ctx.model_yaml_config["handler"]["model_path"]
        seed = int(ctx.model_yaml_config["handler"]["manual_seed"])
        torch.manual_seed(seed)

        logger.info("Model %s loading tokenizer", ctx.model_name)

        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.tokenizer.pad_token = self.tokenizer.eos_token
        config = AutoConfig.from_pretrained(model_name)
        with torch.device("meta"):
            self.model = AutoModelForCausalLM.from_config(
                config, torch_dtype=torch.float16
            )
        self.model = self.model.eval()

        ds_engine = get_ds_engine(self.model, ctx)
        self.model = ds_engine.module
        logger.info("Model %s loaded successfully", ctx.model_name)
        self.initialized = True

    def preprocess(self, requests):
        """
        Basic text preprocessing, based on the user's choice of application mode.
        Args:
            requests (list): A list of dictionaries with a "data" or "body" field, each
                            containing the input text to be processed.
        Returns:
            tuple: A tuple with two tensors: the batch of input ids and the batch of
                attention masks.
        """

    def inference(self, input_batch):
        """
        Predicts the class (or classes) of the received text using the serialized transformers
        checkpoint.
        Args:
            input_batch (tuple): A tuple with two tensors: the batch of input ids and the batch
                                of attention masks, as returned by the preprocess function.
        Returns:
            list: A list of strings with the predicted values for each input text in the batch.
        """
        
    def postprocess(self, inference_output):
        """Post Process Function converts the predicted response into Torchserve readable format.
        Args:
            inference_output (list): It contains the predicted response of the input text.
        Returns:
            (list): Returns a list of the Predictions and Explanations.
        """
```
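
The `preprocess`, `inference`, and `postprocess` bodies are elided above; the authoritative implementations are in the linked `custom_handler.py`. As a rough sketch of what such bodies could look like, assuming the base handler sets `self.device` during initialization:

```
    def preprocess(self, requests):
        # Collect the input text from each request in the batch
        input_texts = []
        for req in requests:
            data = req.get("data") or req.get("body")
            if isinstance(data, (bytes, bytearray)):
                data = data.decode("utf-8")
            input_texts.append(data)
        # Tokenize the batch into padded input ids and attention masks
        batch = self.tokenizer(
            input_texts,
            max_length=self.max_length,
            padding=True,
            truncation=True,
            return_tensors="pt",
        )
        return batch["input_ids"], batch["attention_mask"]

    def inference(self, input_batch):
        input_ids, attention_mask = input_batch
        # Generate continuations with the DeepSpeed-partitioned model
        outputs = self.model.generate(
            input_ids.to(self.device),
            attention_mask=attention_mask.to(self.device),
            max_new_tokens=self.max_new_tokens,
        )
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)

    def postprocess(self, inference_output):
        # Pass the decoded predictions through unchanged
        return inference_output
```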

## Prepare model artifacts
<a name="large-model-inference-tutorials-torchserve-artifacts"></a>

Before you deploy your model on SageMaker AI, you must package your model artifacts. For large models, we recommend that you use the PyTorch [torch-model-archiver](https://github.com/pytorch/serve/blob/master/model-archiver/README.md) tool with the argument `--archive-format no-archive`, which skips compressing the model artifacts. The following example saves all of the model artifacts to a new folder named `opt/`.

```
torch-model-archiver --model-name opt --version 1.0 --handler custom_handler.py --extra-files ds-config.json -r requirements.txt --config-file opt/model-config.yaml --archive-format no-archive
```

Once you have created the `opt/` folder, download the OPT-30b model to the folder using the PyTorch [`Download_model.py`](https://github.com/pytorch/serve/blob/master/examples/large_models/utils/Download_model.py) tool.

```
cd opt
python path_to/Download_model.py --model_path model --model_name facebook/opt-30b --revision main
```

Finally, upload the model artifacts to an Amazon S3 bucket.

```
aws s3 cp opt {your_s3_bucket}/opt --recursive
```

You should now have model artifacts stored in Amazon S3 that are ready to deploy to a SageMaker AI endpoint.

## Deploy the model using the SageMaker Python SDK
<a name="large-model-inference-tutorials-torchserve-deploy"></a>

After preparing your model artifacts, you can deploy your model to a SageMaker AI Hosting endpoint. This section describes how to deploy a single large model to an endpoint and make streaming response predictions. For more information about streaming responses from endpoints, see [Invoke real-time endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-test-endpoints.html).

To deploy your model, complete the following steps:

1. Create a SageMaker AI session, as shown in the following example.

   ```
   import boto3
   import sagemaker
   from sagemaker import Model, image_uris, serializers, deserializers
   
   boto3_session=boto3.session.Session(region_name="us-west-2")
   smr = boto3.client('sagemaker-runtime')
   sm = boto3.client('sagemaker')
   role = sagemaker.get_execution_role()  # execution role for the endpoint
   sess= sagemaker.session.Session(boto3_session, sagemaker_client=sm, sagemaker_runtime_client=smr)  # SageMaker AI session for interacting with different AWS APIs
   region = sess._region_name  # region name of the current SageMaker Studio Classic environment
   account = sess.account_id()  # account_id of the current SageMaker Studio Classic environment
   
   # Configuration:
   bucket_name = sess.default_bucket()
   prefix = "torchserve"
   output_path = f"s3://{bucket_name}/{prefix}"
   print(f'account={account}, region={region}, role={role}, output_path={output_path}')
   ```

1. Create an uncompressed model in SageMaker AI, as shown in the following example.

   ```
   from datetime import datetime
   
   instance_type = "ml.g5.24xlarge"
   endpoint_name = sagemaker.utils.name_from_base("ts-opt-30b")
   s3_uri = {your_s3_bucket}/opt
   
   model = Model(
       name="torchserve-opt-30b" + datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
       # Enable SageMaker uncompressed model artifacts
       model_data={
           "S3DataSource": {
                   "S3Uri": s3_uri,
                   "S3DataType": "S3Prefix",
                   "CompressionType": "None",
           }
       },
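       # container: a TorchServe DLC image URI (see the DLC table and the
       # image URI sketch earlier in this topic)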
       image_uri=container,
       role=role,
       sagemaker_session=sess,
       env={"TS_INSTALL_PY_DEP_PER_MODEL": "true"},
   )
   print(model)
   ```

1. Deploy the model to an Amazon EC2 instance, as shown in the following example.

   ```
   model.deploy(
       initial_instance_count=1,
       instance_type=instance_type,
       endpoint_name=endpoint_name,
       volume_size=512, # increase the size to store large model
       model_data_download_timeout=3600, # increase the timeout to download large model
       container_startup_health_check_timeout=600, # increase the timeout to load large model
   )
   ```

1. Initialize a class to process streaming responses, as shown in the following example.

   ````
   import io
   
   class Parser:
       """
       A helper class for parsing the byte stream input. 
       
       The output of the model will be in the following format:
       ```
       b'{"outputs": [" a"]}\n'
       b'{"outputs": [" challenging"]}\n'
       b'{"outputs": [" problem"]}\n'
       ...
       ```
       
       While usually each PayloadPart event from the event stream will contain a byte array 
       with a full json, this is not guaranteed and some of the json objects may be split across
       PayloadPart events. For example:
       ```
       {'PayloadPart': {'Bytes': b'{"outputs": '}}
       {'PayloadPart': {'Bytes': b'[" problem"]}\n'}}
       ```
       
       This class accounts for this by concatenating bytes written via the 'write' function
       and then exposing a method which will return lines (ending with a '\n' character) within
       the buffer via the 'scan_lines' function. It maintains the position of the last read 
       position to ensure that previous bytes are not exposed again. 
       """
       
       def __init__(self):
           self.buff = io.BytesIO()
           self.read_pos = 0
           
       def write(self, content):
           self.buff.seek(0, io.SEEK_END)
           self.buff.write(content)

       def scan_lines(self):
           self.buff.seek(self.read_pos)
           for line in self.buff.readlines():
               # Yield only complete lines that end with '\n'; a trailing
               # partial line stays buffered until the next write completes it.
               if line.endswith(b'\n'):
                   self.read_pos += len(line)
                   yield line[:-1]
                   
       def reset(self):
           self.read_pos = 0
   ````

1. Test streaming response predictions, as shown in the following example.

   ```
   import json
   
   body = "Today the weather is really nice and I am planning on".encode('utf-8')
   resp = smr.invoke_endpoint_with_response_stream(EndpointName=endpoint_name, Body=body, ContentType="application/json")
   event_stream = resp['Body']
   parser = Parser()
   for event in event_stream:
       parser.write(event['PayloadPart']['Bytes'])
       for line in parser.scan_lines():
           print(line.decode("utf-8"), end=' ')
   ```

Now that you have deployed your model to a SageMaker AI endpoint, you should be able to invoke it for responses. For more information about SageMaker AI real-time endpoints, see [Single model endpoints](realtime-single-model.md).