AcceleratorManufacturer
- class aws_cdk.aws_ec2.AcceleratorManufacturer(*values)
Bases: Enum

Supported hardware accelerator manufacturers.
Restricts instance selection to accelerators from a particular vendor. Useful for choosing specific ecosystems (e.g., NVIDIA CUDA, AWS chips).
Example:
# vpc: ec2.Vpc

security_group = ec2.SecurityGroup(self, "SecurityGroup",
    vpc=vpc,
    description="Security group for managed instances"
)

mi_capacity_provider = ecs.ManagedInstancesCapacityProvider(self, "MICapacityProvider",
    subnets=vpc.private_subnets,
    security_groups=[security_group],
    instance_requirements=ec2.InstanceRequirementsConfig(
        # Required: CPU and memory constraints
        v_cpu_count_min=2,
        v_cpu_count_max=8,
        memory_min=Size.gibibytes(4),
        memory_max=Size.gibibytes(32),

        # CPU preferences
        cpu_manufacturers=[ec2.CpuManufacturer.INTEL, ec2.CpuManufacturer.AMD],
        instance_generations=[ec2.InstanceGeneration.CURRENT],

        # Instance type filtering
        allowed_instance_types=["m5.*", "c5.*"],

        # Performance characteristics
        burstable_performance=ec2.BurstablePerformance.EXCLUDED,
        bare_metal=ec2.BareMetal.EXCLUDED,

        # Accelerator requirements (for ML/AI workloads)
        accelerator_types=[ec2.AcceleratorType.GPU],
        accelerator_manufacturers=[ec2.AcceleratorManufacturer.NVIDIA],
        accelerator_names=[ec2.AcceleratorName.T4, ec2.AcceleratorName.V100],
        accelerator_count_min=1,

        # Storage requirements
        local_storage=ec2.LocalStorage.REQUIRED,
        local_storage_types=[ec2.LocalStorageType.SSD],
        total_local_storage_gBMin=100,

        # Network requirements
        network_interface_count_min=2,
        network_bandwidth_gbps_min=10,

        # Cost optimization
        on_demand_max_price_percentage_over_lowest_price=10
    )
)
Attributes
- AMD
AMD (e.g., Radeon Pro V520 GPU).
- AWS
Amazon Web Services (e.g., Inferentia, Trainium accelerators).
- HABANA
Habana Labs (e.g., Gaudi accelerator).
- NVIDIA
NVIDIA (e.g., A100, V100, T4, K80, M60 GPUs).
- XILINX
Xilinx (e.g., VU9P FPGA).
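The members above behave like ordinary Python Enum values, so they can be collected, compared, and passed around like any other constant. A minimal stand-alone sketch, using a local stand-in Enum rather than the real aws_cdk class (the string values shown are assumed EC2 API identifiers, not verified against the CDK source):

```python
from enum import Enum

class AcceleratorManufacturer(Enum):
    """Local stand-in mirroring aws_cdk.aws_ec2.AcceleratorManufacturer.

    The string values are an assumption for illustration; the real
    mapping lives in the CDK library.
    """
    AMD = "amd"
    AWS = "amazon-web-services"
    HABANA = "habana"
    NVIDIA = "nvidia"
    XILINX = "xilinx"

# Members can be grouped and tested for membership like any constant,
# e.g. when building an accelerator_manufacturers list for
# InstanceRequirementsConfig.
gpu_vendors = [AcceleratorManufacturer.NVIDIA, AcceleratorManufacturer.AMD]
assert AcceleratorManufacturer.NVIDIA in gpu_vendors
```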