

This content is a machine translation of the English version. If there is any ambiguity or inconsistency, the English version prevails.

# Supported Frameworks, AWS Regions, Instance Types, and Tested Models
<a name="training-compiler-support"></a>

**Important**  
Amazon Web Services (AWS) has announced that there will be no new releases of SageMaker Training Compiler. You can continue to use SageMaker Training Compiler through the existing AWS Deep Learning Containers (DLCs) for SageMaker Training. Note that while the existing DLCs remain accessible, they will no longer receive patches or updates from AWS, in accordance with the [AWS Deep Learning Containers Framework Support Policy](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/support-policy.html).

Before you use SageMaker Training Compiler, check whether your framework of choice is supported, whether the instance types are available in your AWS account, and whether your AWS account is in one of the supported AWS Regions.

**Note**  
SageMaker Training Compiler is available in the SageMaker Python SDK v2.70.0 or later.
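
The following is a minimal sketch of how the compiler is typically enabled through the SageMaker Python SDK, assuming a Hugging Face training script; the entry point, IAM role, framework versions, and hyperparameters are placeholders, not values taken from this page.

```python
# A minimal sketch (placeholders throughout): enable SageMaker Training Compiler
# by passing a TrainingCompilerConfig to a framework estimator in the
# SageMaker Python SDK (v2.70.0 or later).
import sagemaker
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

print(sagemaker.__version__)  # confirm the SDK is v2.70.0 or later

estimator = HuggingFace(
    entry_point="train.py",                      # your training script (placeholder)
    role="arn:aws:iam::111122223333:role/ExampleSageMakerRole",  # placeholder IAM role
    instance_type="ml.p3.2xlarge",               # one of the supported instance types
    instance_count=1,
    transformers_version="4.21",                 # a tested framework combination
    pytorch_version="1.11",
    py_version="py38",
    compiler_config=TrainingCompilerConfig(),    # turns on SageMaker Training Compiler
    hyperparameters={"epochs": 3, "train_batch_size": 24},
)
estimator.fit()                                  # add input channels as your script requires
```

The exact version strings accepted by the estimator depend on your SDK release; see the framework tables below for combinations that were tested with the compiler.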

## Supported Frameworks
<a name="training-compiler-supported-frameworks"></a>

SageMaker Training Compiler supports the following deep learning frameworks and is available through AWS Deep Learning Containers.

**Topics**
+ [PyTorch](#training-compiler-supported-frameworks-pytorch)
+ [TensorFlow](#training-compiler-supported-frameworks-tensorflow)

### PyTorch
<a name="training-compiler-supported-frameworks-pytorch"></a>

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/sagemaker/latest/dg/training-compiler-support.html)

### TensorFlow
<a name="training-compiler-supported-frameworks-tensorflow"></a>

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/sagemaker/latest/dg/training-compiler-support.html)

For more information, see [Available Images](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) in the *AWS Deep Learning Containers GitHub repository*.

## AWS Regions
<a name="training-compiler-availablity-zone"></a>

The [SageMaker Training Compiler Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#sagemaker-training-compiler-containers) are available in the AWS Regions where [AWS Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) are in service, except the China Regions.

## Supported Instance Types
<a name="training-compiler-supported-instance-types"></a>

SageMaker Training Compiler is tested on and supports the following machine learning (ML) instance types.
+ P4 instances
+ P3 instances
+ G4dn instances
+ G5 instances

For instance type specifications, see the **Accelerated Computing** section on the [Amazon EC2 Instance Types page](https://aws.amazon.com/ec2/instance-types/). For information about instance pricing, see [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/).

If you encounter an error message similar to the following, follow the instructions at [Request a service quota increase for SageMaker AI resources](https://docs.aws.amazon.com/sagemaker/latest/dg/regions-quotas.html#service-limit-increase-request-procedure).

```
ResourceLimitExceeded: An error occurred (ResourceLimitExceeded) when calling
the CreateTrainingJob operation: The account-level service limit 'ml.p3dn.24xlarge
for training job usage' is 0 Instances, with current utilization of 0 Instances
and a request delta of 1 Instances.
Please contact AWS support to request an increase for this limit.
```
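
If you want to inspect the relevant quota programmatically before opening a request, the following sketch uses the Service Quotas API through boto3. It is an illustration under assumptions: your credentials point at the account and Region where you run training jobs, and the quota-name filter shown is only an example taken from the error message above.

```python
# A minimal sketch, assuming boto3 credentials for the training account/Region.
# The quota name filter is illustrative; adjust it to the instance type in your error.
import boto3

client = boto3.client("service-quotas")

# Page through the SageMaker service quotas and print the matching training quota.
paginator = client.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        if "ml.p3dn.24xlarge for training job usage" in quota["QuotaName"]:
            print(quota["QuotaName"], quota["Value"], quota["QuotaCode"])
            # With the QuotaCode in hand, you can file an increase with
            # client.request_service_quota_increase(...), or follow the console
            # procedure linked above.
```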

## Tested Models
<a name="training-compiler-tested-models"></a>

The following tables list the models that have been tested with SageMaker Training Compiler, along with the largest batch size that fits in memory and other training parameters for reference. SageMaker Training Compiler changes the memory footprint of the model training process; as a result, a larger batch size can often be used during training, which further reduces total training time. In some cases, SageMaker Training Compiler intelligently promotes caching, which reduces the largest batch size that fits on the GPU. You must retune your model hyperparameters and find the optimal batch size for your case. To save time, use the following reference tables to look up a batch size that can serve as a good starting point for your use case.

**Note**  
These batch sizes are local batch sizes that fit into each individual GPU of the respective instance type. You should also adjust the learning rate when changing the batch size.
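
Because the compiler often changes the batch size that fits in memory, a learning-rate adjustment is usually needed. The sketch below shows two common community heuristics (linear and square-root scaling); they are general rules of thumb, not an official SageMaker Training Compiler recommendation, and the numbers in the example are illustrative.

```python
# A minimal sketch of common learning-rate scaling heuristics used when the
# batch size changes; these are rules of thumb, not official guidance.
def scaled_learning_rate(base_lr: float, base_batch_size: int,
                         new_batch_size: int, rule: str = "linear") -> float:
    """Scale a known-good learning rate to a new batch size."""
    ratio = new_batch_size / base_batch_size
    if rule == "linear":        # linear scaling rule
        return base_lr * ratio
    if rule == "sqrt":          # square-root scaling rule
        return base_lr * ratio ** 0.5
    raise ValueError(f"unknown rule: {rule}")

# Example: a job tuned at batch size 84 with lr=5e-5 moves to a compiler-tested
# batch size of 240 from the tables below (illustrative values).
print(scaled_learning_rate(5e-5, 84, 240, rule="linear"))
```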

### PyTorch 1.13.1
<a name="training-compiler-tested-models-pt1131"></a>

**Natural language processing (NLP) models**

The following models are tested for training jobs with all combinations of single-node and multi-node, with single or multiple GPU cores and automatic mixed precision (AMP), as shown below.


**Single-node/multi-node, single-GPU/multi-GPU**

| Model | Dataset | Instance type | Precision | Sequence length | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- | --- |
| albert-base-v2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 80 | 192 | 
| albert-base-v2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 332 | 
| albert-base-v2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 80 | 224 | 
| bert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 160 | 288 | 
| camembert-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 160 | 280 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 240 | 472 | 
| distilgpt2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 77 | 128 | 
| distilgpt2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 138 | 390 | 
| distilgpt2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 96 | 256 | 
| distilroberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 96 | 192 | 
| distilroberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 171 | 380 | 
| distilroberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 112 | 256 | 
| gpt2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 52 | 152 | 
| gpt2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 84 | 240 | 
| gpt2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 58 | 164 | 
| microsoft/deberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 48 | 128 | 
| microsoft/deberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 84 | 207 | 
| microsoft/deberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 53 | 133 | 
| roberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 125 | 224 | 
| xlm-roberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 16 | 31 | 
| xlm-roberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 18 | 50 | 
| xlnet-base-cased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 240 | 
| bert-base-uncased | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 29 | 50 | 
| distilbert-base-uncased | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 45 | 64 | 
| gpt2 | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 18 | 45 | 
| roberta-base | wikitext-103-v1 | g5.48xlarge | float16 | 512 | 23 | 44 | 
| gpt2 | wikitext-103-v1 | p4d.24xlarge | float16 | 512 | 36 | 64 | 

**Computer vision (CV) models**

Tested with the [TensorFlow Model Garden](https://github.com/tensorflow/models) with automatic mixed precision (AMP), as shown below.


**Single/multi-node, single/multi-GPU**

| Model | Dataset | Instance type | Precision | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- |
| ResNet152 | food101 | g4dn.16xlarge | float16 | 128 | 144 | 
| ResNet152 | food101 | g5.4xlarge | float16 | 128 | 192 | 
| ResNet152 | food101 | p3.2xlarge | float16 | 152 | 156 | 
| ViT | food101 | g4dn.16xlarge | float16 | 512 | 512 | 
| ViT | food101 | g5.4xlarge | float16 | 992 | 768 | 
| ViT | food101 | p3.2xlarge | float16 | 848 | 768 | 

### PyTorch 1.12.0
<a name="training-compiler-tested-models-pt1120"></a>

**Natural language processing (NLP) models**

The following models are tested for training jobs with all combinations of single-node and multi-node, with single or multiple GPU cores and automatic mixed precision (AMP), as shown below.


**Single-node/multi-node, single-GPU/multi-GPU**

| Model | Dataset | Instance type | Precision | Sequence length | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- | --- |
| albert-base-v2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 128 | 248 | 
| bert-base-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 160 | 288 | 
| camembert-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 160 | 279 | 
| camembert-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 105 | 164 | 
| distilgpt2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 136 | 256 | 
| distilgpt2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 80 | 118 | 
| gpt2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 84 | 240 | 
| gpt2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 80 | 119 | 
| microsoft/deberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 93 | 197 | 
| microsoft/deberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 113 | 130 | 
| roberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 125 | 224 | 
| roberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 78 | 112 | 
| xlnet-base-cased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 138 | 240 | 
| bert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 |  | 52 | 
| distilbert-base-uncased | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 |  | 160 | 
| gpt2 | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 |  | 25 | 
| roberta-base | wikitext-103-v1 | ml.p4d.24xlarge | float16 | 512 |  | 64 | 

### TensorFlow 2.11.0
<a name="training-compiler-tested-models-tf2110"></a>

**Computer vision (CV) models**

Tested with the [TensorFlow Model Garden](https://github.com/tensorflow/models) with automatic mixed precision (AMP), as shown below.


**Single/multi-node, single/multi-GPU**

| Model | Dataset | Instance type | Precision | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- |
| MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.2xlarge | float16 | 6 | 8 | 
| MaskRCNN-ResNet50-FPN | COCO-2017 | ml.p3.2xlarge | float16 | 4 | 6 | 
| ResNet50 | ImageNet | ml.g5.2xlarge | float16 | 192 | 256 | 
| ResNet50 | ImageNet | ml.p3.2xlarge | float16 | 256 | 256 | 
| ResNet101 | ImageNet | ml.g5.2xlarge | float16 | 128 | 256 | 
| ResNet101 | ImageNet | ml.p3.2xlarge | float16 | 128 | 128 | 
| ResNet152 | ImageNet | ml.g5.2xlarge | float16 | 128 | 224 | 
| ResNet152 | ImageNet | ml.p3.2xlarge | float16 | 128 | 128 | 
| VisionTransformer | ImageNet | ml.g5.2xlarge | float16 | 112 | 144 | 
| VisionTransformer | ImageNet | ml.p3.2xlarge | float16 | 96 | 128 | 

**Natural language processing (NLP) models**

Tested with [Transformers models](https://github.com/huggingface/transformers) with `Sequence_Len=128` and automatic mixed precision (AMP), as shown below.


**Single/multi-node, single/multi-GPU**

| Model | Dataset | Instance type | Precision | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- |
| albert-base-v2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 160 | 197 | 
| albert-base-v2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 95 | 127 | 
| bert-base-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 160 | 128 | 
| bert-base-uncased | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 104 | 111 | 
| bert-large-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 65 | 48 | 
| bert-large-uncased | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 40 | 35 | 
| camembert-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 162 | 
| camembert-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 105 | 111 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 256 | 264 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 128 | 169 | 
| gpt2 | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 120 | 
| gpt2 | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 80 | 83 | 
| jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 32 | 32 | 
| jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 32 | 36 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 144 | 160 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 106 | 110 | 
| roberta-base | wikitext-2-raw-v1 | ml.g5.2xlarge | float16 | 128 | 128 | 
| roberta-base | wikitext-2-raw-v1 | ml.p3.2xlarge | float16 | 72 | 98 | 
| albert-base-v2 | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 128 | 192 | 
| albert-base-v2 | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 95 | 96 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 256 | 256 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 140 | 184 | 
| google/electra-small-discriminator | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 256 | 384 | 
| google/electra-small-discriminator | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 256 | 268 | 
| gpt2 | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 116 | 116 | 
| gpt2 | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 85 | 83 | 
| gpt2 | wikitext-2-raw-v1 | ml.p4d.24xlarge | float16 | 94 | 110 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | ml.g5.48xlarge | float16 | 187 | 164 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | ml.p3.16xlarge | float16 | 106 | 111 | 

### TensorFlow 2.10.0
<a name="training-compiler-tested-models-tf2100"></a>

**Computer vision (CV) models**

Tested with the [TensorFlow Model Garden](https://github.com/tensorflow/models) with automatic mixed precision (AMP), as shown below.


**Single node, single-GPU/multi-GPU**

| Model | Dataset | Instance type | Precision | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- |
| DetectionTransformer-ResNet50 | COCO-2017 | ml.g4dn.2xlarge | float32 | 2 | 4 | 
| DetectionTransformer-ResNet50 | COCO-2017 | ml.g5.2xlarge | float32 | 3 | 6 | 
| DetectionTransformer-ResNet50 | COCO-2017 | ml.p3.2xlarge | float32 | 2 | 4 | 
| MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g4dn.2xlarge | float16 | 4 | 6 | 
| MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.2xlarge | float16 | 6 | 8 | 
| MaskRCNN-ResNet50-FPN | COCO-2017 | ml.g5.48xlarge | float16 | 48 | 64 | 
| MaskRCNN-ResNet50-FPN | COCO-2017 | ml.p3.2xlarge | float16 | 4 | 6 | 
| ResNet50 | ImageNet | ml.g4dn.2xlarge | float16 | 224 | 256 | 
| ResNet50 | ImageNet | ml.g5.2xlarge | float16 | 192 | 160 | 
| ResNet50 | ImageNet | ml.g5.48xlarge | float16 | 2048 | 2048 | 
| ResNet50 | ImageNet | ml.p3.2xlarge | float16 | 224 | 160 | 
| ResNet101 | ImageNet | ml.g4dn.2xlarge | float16 | 160 | 128 | 
| ResNet101 | ImageNet | ml.g5.2xlarge | float16 | 192 | 256 | 
| ResNet101 | ImageNet | ml.g5.48xlarge | float16 | 2048 | 2048 | 
| ResNet101 | ImageNet | ml.p3.2xlarge | float16 | 160 | 224 | 
| ResNet152 | ImageNet | ml.g4dn.2xlarge | float16 | 128 | 128 | 
| ResNet152 | ImageNet | ml.g5.2xlarge | float16 | 192 | 224 | 
| ResNet152 | ImageNet | ml.g5.48xlarge | float16 | 1536 | 1792 | 
| ResNet152 | ImageNet | ml.p3.2xlarge | float16 | 128 | 160 | 
| VisionTransformer | ImageNet | ml.g4dn.2xlarge | float16 | 80 | 128 | 
| VisionTransformer | ImageNet | ml.g5.2xlarge | float16 | 112 | 144 | 
| VisionTransformer | ImageNet | ml.g5.48xlarge | float16 | 896 | 1152 | 
| VisionTransformer | ImageNet | ml.p3.2xlarge | float16 | 80 | 128 | 

**Natural language processing (NLP) models**

Tested with [Transformers models](https://github.com/huggingface/transformers) with `Sequence_Len=128` and automatic mixed precision (AMP), as shown below.


**Single node, single-GPU/multi-GPU**

| Model | Dataset | Instance type | Precision | Batch size for native framework | Batch size for SageMaker Training Compiler |
| --- | --- | --- | --- | --- | --- |
| albert-base-v2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 128 | 112 | 
| albert-base-v2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 128 | 
| albert-base-v2 | wikitext-2-raw-v1 | p3.8xlarge | float16 | 128 | 135 | 
| albert-base-v2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 191 | 
| bert-base-uncased | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 64 | 94 | 
| bert-base-uncased | wikitext-2-raw-v1 | p3.2xlarge | float16 | 96 | 101 | 
| bert-base-uncased | wikitext-2-raw-v1 | p3.8xlarge | float16 | 96 | 96 | 
| bert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 
| bert-large-uncased | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 35 | 21 | 
| bert-large-uncased | wikitext-2-raw-v1 | p3.2xlarge | float16 | 39 | 26 | 
| bert-large-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 60 | 50 | 
| camembert-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 96 | 90 | 
| camembert-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 96 | 98 | 
| camembert-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 96 | 96 | 
| camembert-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 256 | 160 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | p3.2xlarge | float16 | 128 | 176 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | p3.8xlarge | float16 | 128 | 160 | 
| distilbert-base-uncased | wikitext-2-raw-v1 | g5.4xlarge | float16 | 256 | 258 | 
| google/electra-small-discriminator | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 256 | 216 | 
| google/electra-small-discriminator | wikitext-2-raw-v1 | p3.2xlarge | float16 | 256 | 230 | 
| google/electra-small-discriminator | wikitext-2-raw-v1 | p3.8xlarge | float16 | 256 | 224 | 
| google/electra-small-discriminator | wikitext-2-raw-v1 | g5.4xlarge | float16 | 256 | 320 | 
| gpt2 | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 80 | 64 | 
| gpt2 | wikitext-2-raw-v1 | p3.2xlarge | float16 | 80 | 77 | 
| gpt2 | wikitext-2-raw-v1 | p3.8xlarge | float16 | 80 | 72 | 
| gpt2 | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 120 | 
| jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 28 | 24 | 
| jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 32 | 24 | 
| jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 32 | 26 | 
| jplu/tf-xlm-roberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 66 | 52 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 96 | 92 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 96 | 101 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 96 | 101 | 
| microsoft/mpnet-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 152 | 
| roberta-base | wikitext-2-raw-v1 | g4dn.16xlarge | float16 | 64 | 72 | 
| roberta-base | wikitext-2-raw-v1 | p3.2xlarge | float16 | 64 | 84 | 
| roberta-base | wikitext-2-raw-v1 | p3.8xlarge | float16 | 64 | 86 | 
| roberta-base | wikitext-2-raw-v1 | g5.4xlarge | float16 | 128 | 128 | 

### TensorFlow 2.9.1
<a name="training-compiler-tested-models-tf291"></a>

Tested with the [TensorFlow Model Garden](https://github.com/tensorflow/models) with automatic mixed precision (AMP).

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/sagemaker/latest/dg/training-compiler-support.html)

\* Batch sizes marked with an asterisk (\*) indicate the largest batch size tested by the SageMaker Training Compiler developer team. For the marked cells, the instance might be able to fit a larger batch size than what is indicated.

### Transformers 4.21.1 with PyTorch 1.11.0
<a name="training-compiler-tested-models-hf421-pt111"></a>

Tested with `Sequence_Len=512` and automatic mixed precision (AMP).

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/sagemaker/latest/dg/training-compiler-support.html)

### Transformers 4.17.0 with PyTorch 1.10.2
<a name="training-compiler-tested-models-hf417-pt110"></a>

Tested with `Sequence_Len=512` and automatic mixed precision (AMP).

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_tw/sagemaker/latest/dg/training-compiler-support.html)

### Transformers 4.11.0 with PyTorch 1.9.0
<a name="training-compiler-tested-models-hf411-pt190"></a>

Tested with `Sequence_Len=512` and automatic mixed precision (AMP).


**Single node, single GPU**

| Model | Instance type | Native batch size | Batch size with Training Compiler |
| --- | --- | --- | --- |
| albert-base-v2  | ml.p3.2xlarge | 12 | 32 | 
| bert-base-cased  | ml.p3.2xlarge | 14 | 24 | 
| bert-base-chinese | ml.p3.2xlarge | 16 | 24 | 
| bert-base-multilingual-cased  | ml.p3.2xlarge | 4 | 16 | 
| bert-base-multilingual-uncased  | ml.p3.2xlarge | 8 | 16 | 
| bert-base-uncased  | ml.p3.2xlarge | 12 | 24 | 
| cl-tohoku/bert-base-japanese-whole-word-masking | ml.p3.2xlarge | 12 | 24 | 
| cl-tohoku/bert-base-japanese  | ml.p3.2xlarge | 12 | 24 | 
| distilbert-base-uncased  | ml.p3.2xlarge | 28 | 32 | 
| distilbert-base-uncased-finetuned-sst-2-english | ml.p3.2xlarge | 28 | 32 | 
| distilgpt2  | ml.p3.2xlarge | 16 | 32 | 
| facebook/bart-base  | ml.p3.2xlarge | 4 | 8 | 
| gpt2 | ml.p3.2xlarge | 6 | 20 | 
| nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large  | ml.p3.2xlarge | 20 | 32 | 
| roberta-base  | ml.p3.2xlarge | 12 | 20 | 


**Single node, multi-GPU**

| Model | Instance type | Native batch size | Batch size with Training Compiler |
| --- | --- | --- | --- |
| bert-base-chinese  | ml.p3.8xlarge | 16 | 26 | 
| bert-base-multilingual-cased  | ml.p3.8xlarge | 6 | 16 | 
| bert-base-multilingual-uncased | ml.p3.8xlarge | 6 | 16 | 
| bert-base-uncased  | ml.p3.8xlarge | 14 | 24 | 
| distilbert-base-uncased  | ml.p3.8xlarge | 14 | 32 | 
| distilgpt2 | ml.p3.8xlarge | 6 | 32 | 
| facebook/bart-base | ml.p3.8xlarge | 8 | 16 | 
| gpt2  | ml.p3.8xlarge | 8 | 20 | 
| roberta-base  | ml.p3.8xlarge | 12 | 20 | 

### Transformers 4.17.0 with TensorFlow 2.6.3
<a name="training-compiler-tested-models-hf417-tf263"></a>

Tested with `Sequence_Len=128` and automatic mixed precision (AMP).


| Model | Instance type | Batch size for native framework | Batch size with Training Compiler |
| --- | --- | --- | --- |
| albert-base-v2 | ml.g4dn.16xlarge | 136 | 208 | 
| albert-base-v2 | ml.g5.4xlarge | 219 | 312 | 
| albert-base-v2 | ml.p3.2xlarge | 152 | 208 | 
| albert-base-v2 | ml.p3.8xlarge | 152 | 192 | 
| bert-base-uncased | ml.g4dn.16xlarge | 120 | 101 | 
| bert-base-uncased | ml.g5.4xlarge | 184 | 160 | 
| bert-base-uncased | ml.p3.2xlarge | 128 | 108 | 
| bert-large-uncased | ml.g4dn.16xlarge | 37 | 28 | 
| bert-large-uncased | ml.g5.4xlarge | 64 | 55 | 
| bert-large-uncased | ml.p3.2xlarge | 40 | 32 | 
| camembert-base | ml.g4dn.16xlarge | 96 | 100 | 
| camembert-base | ml.g5.4xlarge | 190 | 160 | 
| camembert-base | ml.p3.2xlarge | 129 | 108 | 
| camembert-base | ml.p3.8xlarge | 128 | 104 | 
| distilbert-base-uncased | ml.g4dn.16xlarge | 210 | 160 | 
| distilbert-base-uncased | ml.g5.4xlarge | 327 | 288 | 
| distilbert-base-uncased | ml.p3.2xlarge | 224 | 196 | 
| distilbert-base-uncased | ml.p3.8xlarge | 192 | 182 | 
| google/electra-small-discriminator | ml.g4dn.16xlarge | 336 | 288 | 
| google/electra-small-discriminator | ml.g5.4xlarge | 504 | 384 | 
| google/electra-small-discriminator | ml.p3.2xlarge | 352 | 323 | 
| gpt2 | ml.g4dn.16xlarge | 89 | 64 | 
| gpt2 | ml.g5.4xlarge | 140 | 146 | 
| gpt2 | ml.p3.2xlarge | 94 | 96 | 
| gpt2 | ml.p3.8xlarge | 96 | 88 | 
| jplu/tf-xlm-roberta-base | ml.g4dn.16xlarge | 52 | 16 | 
| jplu/tf-xlm-roberta-base | ml.g5.4xlarge | 64 | 44 | 
| microsoft/mpnet-base | ml.g4dn.16xlarge | 120 | 100 | 
| microsoft/mpnet-base | ml.g5.4xlarge | 192 | 160 | 
| microsoft/mpnet-base | ml.p3.2xlarge | 128 | 104 | 
| microsoft/mpnet-base | ml.p3.8xlarge | 130 | 92 | 
| roberta-base | ml.g4dn.16xlarge | 108 | 64 | 
| roberta-base | ml.g5.4xlarge | 176 | 142 | 
| roberta-base | ml.p3.2xlarge | 118 | 100 | 
| roberta-base | ml.p3.8xlarge | 112 | 88 | 

### Transformers 4.11.0 with TensorFlow 2.5.1
<a name="training-compiler-tested-models-hf411-tf251"></a>

Tested with `Sequence_Len=128` and automatic mixed precision (AMP).


**Single node, single GPU**

| Model | Instance type | Native batch size | Batch size with Training Compiler |
| --- | --- | --- | --- |
| albert-base-v2  | ml.p3.2xlarge | 128 | 128 | 
| bart-base  | ml.p3.2xlarge | 12 | 64 | 
| bart-large  | ml.p3.2xlarge | 4 | 28 | 
| bert-base-cased  | ml.p3.2xlarge | 16 | 128 | 
| bert-base-chinese | ml.p3.2xlarge | 16 | 128 | 
| bert-base-multilingual-cased  | ml.p3.2xlarge | 12 | 64 | 
| bert-base-multilingual-uncased  | ml.p3.2xlarge | 16 | 96 | 
| bert-base-uncased | ml.p3.2xlarge | 16 | 96 | 
| bert-large-uncased  | ml.p3.2xlarge | 4 | 24 | 
| cl-tohoku/bert-base-japanese  | ml.p3.2xlarge | 16 | 128 | 
| cl-tohoku/bert-base-japanese-whole-word-masking  | ml.p3.2xlarge | 16 | 128 | 
| distilbert-base-sst2  | ml.p3.2xlarge | 32 | 128 | 
| distilbert-base-uncased  | ml.p3.2xlarge | 32 | 128 | 
| distilgpt2 | ml.p3.2xlarge | 32 | 128 | 
| gpt2  | ml.p3.2xlarge | 12 | 64 | 
| gpt2-large  | ml.p3.2xlarge | 2 | 24 | 
| jplu/tf-xlm-roberta-base  | ml.p3.2xlarge | 12 | 32 | 
| roberta-base  | ml.p3.2xlarge | 4 | 64 | 
| roberta-large  | ml.p3.2xlarge | 4 | 64 | 
| t5-base  | ml.p3.2xlarge | 64 | 64 | 
| t5-small  | ml.p3.2xlarge | 128 | 128 | 