# enum AcceleratorName

| Language | Type name |
|---|---|
| .NET | `Amazon.CDK.AWS.EC2.AcceleratorName` |
| Go | `github.com/aws/aws-cdk-go/awscdk/v2/awsec2#AcceleratorName` |
| Java | `software.amazon.awscdk.services.ec2.AcceleratorName` |
| Python | `aws_cdk.aws_ec2.AcceleratorName` |
| TypeScript (source) | `aws-cdk-lib` » `aws_ec2` » `AcceleratorName` |
Specific hardware accelerator models supported by EC2.

Defines the exact accelerator models that can be required or excluded when selecting instance types.
## Example
```ts
declare const vpc: ec2.Vpc;

const securityGroup = new ec2.SecurityGroup(this, 'SecurityGroup', {
  vpc,
  description: 'Security group for managed instances',
});

const miCapacityProvider = new ecs.ManagedInstancesCapacityProvider(this, 'MICapacityProvider', {
  subnets: vpc.privateSubnets,
  securityGroups: [securityGroup],
  instanceRequirements: {
    // Required: CPU and memory constraints
    vCpuCountMin: 2,
    vCpuCountMax: 8,
    memoryMin: Size.gibibytes(4),
    memoryMax: Size.gibibytes(32),

    // CPU preferences
    cpuManufacturers: [ec2.CpuManufacturer.INTEL, ec2.CpuManufacturer.AMD],
    instanceGenerations: [ec2.InstanceGeneration.CURRENT],

    // Instance type filtering
    allowedInstanceTypes: ['m5.*', 'c5.*'],

    // Performance characteristics
    burstablePerformance: ec2.BurstablePerformance.EXCLUDED,
    bareMetal: ec2.BareMetal.EXCLUDED,

    // Accelerator requirements (for ML/AI workloads)
    acceleratorTypes: [ec2.AcceleratorType.GPU],
    acceleratorManufacturers: [ec2.AcceleratorManufacturer.NVIDIA],
    acceleratorNames: [ec2.AcceleratorName.T4, ec2.AcceleratorName.V100],
    acceleratorCountMin: 1,

    // Storage requirements
    localStorage: ec2.LocalStorage.REQUIRED,
    localStorageTypes: [ec2.LocalStorageType.SSD],
    totalLocalStorageGBMin: 100,

    // Network requirements
    networkInterfaceCountMin: 2,
    networkBandwidthGbpsMin: 10,

    // Cost optimization
    onDemandMaxPricePercentageOverLowestPrice: 10,
  },
});
```
## Members
| Name | Description |
|---|---|
| A100 | NVIDIA A100 GPU. |
| K80 | NVIDIA K80 GPU. |
| M60 | NVIDIA M60 GPU. |
| RADEON_PRO_V520 | AMD Radeon Pro V520 GPU. |
| T4 | NVIDIA T4 GPU. |
| V100 | NVIDIA V100 GPU. |
| VU9P | Xilinx VU9P FPGA. |
| A10G | NVIDIA A10G GPU. |
| H100 | NVIDIA H100 GPU. |
| INFERENTIA | AWS Inferentia chips. |
| K520 | NVIDIA GRID K520 GPU. |
| T4G | NVIDIA T4G GPU. |
| L40S | NVIDIA L40S GPU for AI inference and graphics workloads. |
| L4 | NVIDIA L4 GPU for AI inference and graphics workloads. |
| GAUDI_HL_205 | Habana Gaudi HL-205 accelerator for deep learning training. |
| INFERENTIA2 | AWS Inferentia2 chips for high-performance ML inference. |
| TRAINIUM | AWS Trainium chips for high-performance ML training. |
| TRAINIUM2 | AWS Trainium2 chips for high-performance ML training. |
| U30 | Xilinx U30 media transcoding accelerator for video processing. |
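These enum values feed EC2's attribute-based instance type selection: when `acceleratorNames` is set, only instance types whose accelerator model appears in the list remain eligible. The filtering semantics can be sketched in plain TypeScript (a conceptual illustration only, not the CDK or EC2 implementation; the `Candidate` type and the sample catalog below are hypothetical):

```typescript
// Hypothetical candidate instance types paired with their accelerator model.
interface Candidate {
  instanceType: string;
  acceleratorName: string | null; // null = no accelerator
}

// Conceptual version of the acceleratorNames filter: a candidate is kept
// only if it has an accelerator and its model is in the allowed list.
function filterByAcceleratorNames(
  candidates: Candidate[],
  allowedNames: string[],
): Candidate[] {
  const allowed = new Set(allowedNames);
  return candidates.filter(
    (c) => c.acceleratorName !== null && allowed.has(c.acceleratorName),
  );
}

// Sample data (illustrative only, not real EC2 catalog output).
const candidates: Candidate[] = [
  { instanceType: 'g4dn.xlarge', acceleratorName: 'T4' },
  { instanceType: 'p3.2xlarge', acceleratorName: 'V100' },
  { instanceType: 'p4d.24xlarge', acceleratorName: 'A100' },
  { instanceType: 'm5.large', acceleratorName: null },
];

const eligible = filterByAcceleratorNames(candidates, ['T4', 'V100']);
console.log(eligible.map((c) => c.instanceType)); // ['g4dn.xlarge', 'p3.2xlarge']
```

Note that an accelerator-less instance type such as `m5.large` is always dropped once `acceleratorNames` is specified; to allow such types, omit the accelerator constraints instead.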