AcceleratorName
- class aws_cdk.aws_ec2.AcceleratorName(*values)
Bases:
Enum
Specific hardware accelerator models supported by EC2.
Defines exact accelerator models that can be required or excluded when selecting instance types.
Example:
# infrastructure_role: iam.Role
# instance_profile: iam.InstanceProfile
# vpc: ec2.Vpc

mi_capacity_provider = ecs.ManagedInstancesCapacityProvider(self, "MICapacityProvider",
    infrastructure_role=infrastructure_role,
    ec2_instance_profile=instance_profile,
    subnets=vpc.private_subnets,
    instance_requirements=ec2.InstanceRequirementsConfig(
        # Required: CPU and memory constraints
        v_cpu_count_min=2,
        v_cpu_count_max=8,
        memory_min=Size.gibibytes(4),
        memory_max=Size.gibibytes(32),

        # CPU preferences
        cpu_manufacturers=[ec2.CpuManufacturer.INTEL, ec2.CpuManufacturer.AMD],
        instance_generations=[ec2.InstanceGeneration.CURRENT],

        # Instance type filtering
        allowed_instance_types=["m5.*", "c5.*"],

        # Performance characteristics
        burstable_performance=ec2.BurstablePerformance.EXCLUDED,
        bare_metal=ec2.BareMetal.EXCLUDED,

        # Accelerator requirements (for ML/AI workloads)
        accelerator_types=[ec2.AcceleratorType.GPU],
        accelerator_manufacturers=[ec2.AcceleratorManufacturer.NVIDIA],
        accelerator_names=[ec2.AcceleratorName.T4, ec2.AcceleratorName.V100],
        accelerator_count_min=1,

        # Storage requirements
        local_storage=ec2.LocalStorage.REQUIRED,
        local_storage_types=[ec2.LocalStorageType.SSD],
        total_local_storage_gBMin=100,

        # Network requirements
        network_interface_count_min=2,
        network_bandwidth_gbps_min=10,

        # Cost optimization
        on_demand_max_price_percentage_over_lowest_price=10
    )
)
Attributes
- A100
NVIDIA A100 GPU.
- K80
NVIDIA K80 GPU.
- M60
NVIDIA M60 GPU.
- RADEON_PRO_V520
AMD Radeon Pro V520 GPU.
- T4
NVIDIA T4 GPU.
- V100
NVIDIA V100 GPU.
- VU9P
Xilinx VU9P FPGA.