Package software.amazon.awscdk.services.eks_v2
Amazon EKS V2 Construct Library
The aws-eks-v2 module is a rewrite of the existing aws-eks module (https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html). This new iteration leverages native L1 CFN resources, replacing the previous custom resource approach for creating EKS clusters and Fargate Profiles.
Compared to the original EKS module, it has the following major changes:
- Use native L1 AWS::EKS::Cluster resource to replace custom resource Custom::AWSCDK-EKS-Cluster
- Use native L1 AWS::EKS::FargateProfile resource to replace custom resource Custom::AWSCDK-EKS-FargateProfile
- Kubectl Handler will not be created by default. It will only be created if users specify it.
- Remove AwsAuth construct. Permissions to the cluster will be managed by Access Entry.
- Remove the limit of 1 cluster per stack
- Remove nested stacks
- API changes to make them more ergonomic.
Quick start
Here is a minimal example of defining an AWS EKS cluster:
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
.version(KubernetesVersion.V1_34)
.build();
Architecture
                                              +-----------------+
                              kubectl         |                 |
                                      +------>| Kubectl Handler |
                                      |       |   (Optional)    |
                                      |       +-----------------+
                                      |
+-------------------------------------+-------------------------------------+
| EKS Cluster (Auto Mode) |
| AWS::EKS::Cluster |
| |
| +---------------------------------------------------------------------+ |
| | Auto Mode Compute (Managed by EKS) (Default) | |
| | | |
| | - Automatically provisions EC2 instances | |
| | - Auto scaling based on pod requirements | |
| | - No manual node group configuration needed | |
| | | |
| +---------------------------------------------------------------------+ |
| |
+---------------------------------------------------------------------------+
In a nutshell:
- Auto Mode (Default) – The fully managed capacity mode in EKS.
EKS automatically provisions and scales EC2 capacity based on pod requirements.
It manages internal system and general-purpose NodePools, handles networking and storage setup, and removes the need for user-managed node groups or Auto Scaling Groups.
Cluster cluster = Cluster.Builder.create(this, "AutoModeCluster")
.version(KubernetesVersion.V1_34)
.build();
- Managed Node Groups – The semi-managed capacity mode.
EKS provisions and manages EC2 nodes on your behalf but you configure the instance types, scaling ranges, and update strategy.
AWS handles node health, draining, and rolling updates while you retain control over scaling and cost optimization.
You can also define Fargate Profiles that determine which pods or namespaces run on Fargate infrastructure.
Cluster cluster = Cluster.Builder.create(this, "ManagedNodeCluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.build();
// Add a Fargate Profile for specific workloads (e.g., default namespace)
cluster.addFargateProfile("FargateProfile", FargateProfileOptions.builder()
.selectors(List.of(Selector.builder().namespace("default").build()))
.build());
- Fargate Mode – The Fargate capacity mode.
EKS runs your pods directly on AWS Fargate without provisioning EC2 nodes.
FargateCluster cluster = FargateCluster.Builder.create(this, "FargateCluster")
.version(KubernetesVersion.V1_34)
.build();
- Self-Managed Nodes – The fully manual capacity mode.
You create and manage EC2 instances (via an Auto Scaling Group) and connect them to the cluster manually.
This provides maximum flexibility for custom AMIs or configurations but also the highest operational overhead.
Cluster cluster = Cluster.Builder.create(this, "SelfManagedCluster")
.version(KubernetesVersion.V1_34)
.build();
// Add self-managed Auto Scaling Group
cluster.addAutoScalingGroupCapacity("self-managed-asg", AutoScalingGroupCapacityOptions.builder()
.instanceType(InstanceType.of(InstanceClass.T3, InstanceSize.MEDIUM))
.minCapacity(1)
.maxCapacity(5)
.build());
- Kubectl Handler (Optional) – A Lambda-backed custom resource created by the AWS CDK to execute
kubectl commands (like apply or patch) during deployment. Regardless of the capacity mode, this handler may still be created to apply Kubernetes manifests as part of CDK provisioning.
Provisioning cluster
Creating a new cluster is done using the Cluster construct. The only required property is the Kubernetes version.
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.build();
You can also use FargateCluster to provision a cluster that uses only Fargate workers.
FargateCluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.build();
Note: Unlike the previous EKS cluster, Kubectl Handler will not
be created by default. It will only be deployed when kubectlProviderOptions
property is used.
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster.Builder.create(this, "hello-eks")
.version(KubernetesVersion.V1_34)
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.build())
.build();
EKS Auto Mode
Amazon EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, allowing AWS to set up and manage the infrastructure that enables the smooth operation of your workloads.
Using Auto Mode
While aws-eks uses DefaultCapacityType.NODEGROUP by default, aws-eks-v2 uses DefaultCapacityType.AUTOMODE as the default capacity type.
Auto Mode is enabled by default when creating a new cluster without specifying any capacity-related properties:
// Create EKS cluster with Auto Mode implicitly enabled
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster")
.version(KubernetesVersion.V1_34)
.build();
You can also explicitly enable Auto Mode using defaultCapacityType:
// Create EKS cluster with Auto Mode explicitly enabled
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.AUTOMODE)
.build();
Node Pools
When Auto Mode is enabled, the cluster comes with two default node pools:
- system: For running system components and add-ons
- general-purpose: For running your application workloads
These node pools are managed automatically by EKS. You can configure which node pools to enable through the compute property:
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.AUTOMODE)
.compute(ComputeConfig.builder()
.nodePools(List.of("system", "general-purpose"))
.build())
.build();
For more information, see Create a Node Pool for EKS Auto Mode.
Disabling Default Node Pools
You can disable the default node pools entirely by setting an empty array for nodePools. This is useful when you want to use Auto Mode features but manage your compute resources separately:
Cluster cluster = Cluster.Builder.create(this, "EksAutoCluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.AUTOMODE)
.compute(ComputeConfig.builder()
.nodePools(List.of())
.build())
.build();
When node pools are disabled this way, no IAM role will be created for the node pools, preventing deployment failures that would otherwise occur when a role is created without any node pools.
Node Groups as the default capacity type
If you prefer to manage your own node groups instead of using Auto Mode, you can use the traditional node group approach by specifying defaultCapacityType as NODEGROUP:
// Create EKS cluster with traditional managed node group
Cluster cluster = Cluster.Builder.create(this, "EksCluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.defaultCapacity(3) // Number of instances
.defaultCapacityInstance(InstanceType.of(InstanceClass.T3, InstanceSize.LARGE))
.build();
You can also create a cluster with no initial capacity and add node groups later:
Cluster cluster = Cluster.Builder.create(this, "EksCluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.defaultCapacity(0)
.build();
// Add node groups as needed
cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
.minSize(1)
.maxSize(3)
.instanceTypes(List.of(InstanceType.of(InstanceClass.T3, InstanceSize.LARGE)))
.build());
Read Managed node groups for more information on how to add node groups to the cluster.
Mixed with Auto Mode and Node Groups
You can combine Auto Mode with traditional node groups for specific workload requirements:
Cluster cluster = Cluster.Builder.create(this, "Cluster")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.AUTOMODE)
.compute(ComputeConfig.builder()
.nodePools(List.of("system", "general-purpose"))
.build())
.build();
// Add specialized node group for specific workloads
cluster.addNodegroupCapacity("specialized-workload", NodegroupOptions.builder()
.minSize(1)
.maxSize(3)
.instanceTypes(List.of(InstanceType.of(InstanceClass.C5, InstanceSize.XLARGE)))
.labels(Map.of(
"workload", "specialized"))
.build());
Important Notes
- Auto Mode and traditional capacity management are mutually exclusive at the default capacity level. You cannot opt in to Auto Mode and specify defaultCapacity or defaultCapacityInstance.
- When Auto Mode is enabled:
- The cluster will automatically manage compute resources
- Node pools cannot be modified, only enabled or disabled
- EKS will handle scaling and management of the node pools
- Auto Mode requires specific IAM permissions. The construct will automatically attach the required managed policies.
Managed node groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With Amazon EKS managed node groups, you don't need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
For more details visit Amazon EKS Managed Node Groups.
By default, when using DefaultCapacityType.NODEGROUP, this library will allocate a managed node group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money).
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.build();
At cluster instantiation time, you can customize the number of instances and their type:
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.defaultCapacity(5)
.defaultCapacityInstance(InstanceType.of(InstanceClass.M5, InstanceSize.SMALL))
.build();
To access the node group that was created on your behalf, you can use cluster.defaultNodegroup.
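For instance, you can reference the default node group elsewhere in the stack; a minimal sketch, assuming NODEGROUP capacity so that a default node group actually exists (the output name is illustrative):

```java
Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.build();
// defaultNodegroup is only populated when a default node group was created
Nodegroup defaultNodegroup = cluster.getDefaultNodegroup();
CfnOutput.Builder.create(this, "DefaultNodegroupName").value(defaultNodegroup.getNodegroupName()).build();
```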
Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the cluster.addNodegroupCapacity method:
Cluster cluster = Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.defaultCapacityType(DefaultCapacityType.NODEGROUP)
.defaultCapacity(0)
.build();
cluster.addNodegroupCapacity("custom-node-group", NodegroupOptions.builder()
.instanceTypes(List.of(new InstanceType("m5.large")))
.minSize(4)
.diskSize(100)
.build());
Fargate profiles
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate Profiles, which are defined as part of your Amazon EKS cluster.
See Fargate Considerations in the AWS EKS User Guide.
You can add Fargate Profiles to any EKS cluster defined in your CDK app
through the addFargateProfile() method. The following example adds a profile
that will match all pods from the "default" namespace:
Cluster cluster;
cluster.addFargateProfile("MyProfile", FargateProfileOptions.builder()
.selectors(List.of(Selector.builder().namespace("default").build()))
.build());
You can also directly use the FargateProfile construct to create profiles under different scopes:
Cluster cluster;
FargateProfile.Builder.create(this, "MyProfile")
.cluster(cluster)
.selectors(List.of(Selector.builder().namespace("default").build()))
.build();
To create an EKS cluster that only uses Fargate capacity, you can use FargateCluster.
The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to run CoreDNS on Fargate.
FargateCluster cluster = FargateCluster.Builder.create(this, "MyCluster")
.version(KubernetesVersion.V1_34)
.build();
FargateCluster will create a default FargateProfile which can be accessed via the cluster's defaultProfile property. The created profile can also be customized by passing options as with addFargateProfile.
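For example, a sketch of reading the default profile's name (the output name is illustrative):

```java
FargateCluster cluster = FargateCluster.Builder.create(this, "MyCluster")
.version(KubernetesVersion.V1_34)
.build();
// The default Fargate Profile created on your behalf
FargateProfile defaultProfile = cluster.getDefaultProfile();
CfnOutput.Builder.create(this, "DefaultProfileName").value(defaultProfile.getFargateProfileName()).build();
```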
NOTE: Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4).
Self-managed capacity
Self-managed capacity gives you the most control over your worker nodes by allowing you to create and manage your own EC2 Auto Scaling Groups. This approach provides maximum flexibility for custom AMIs, instance configurations, and scaling policies, but requires more operational overhead.
You can add self-managed capacity to any cluster using the addAutoScalingGroupCapacity method:
Cluster cluster = Cluster.Builder.create(this, "Cluster")
.version(KubernetesVersion.V1_34)
.build();
cluster.addAutoScalingGroupCapacity("self-managed-nodes", AutoScalingGroupCapacityOptions.builder()
.instanceType(InstanceType.of(InstanceClass.T3, InstanceSize.MEDIUM))
.minCapacity(1)
.maxCapacity(10)
.desiredCapacity(3)
.build());
You can specify custom subnets for the Auto Scaling Group:
Vpc vpc;
Cluster cluster;
cluster.addAutoScalingGroupCapacity("custom-subnet-nodes", AutoScalingGroupCapacityOptions.builder()
.vpcSubnets(SubnetSelection.builder().subnets(vpc.getPrivateSubnets()).build())
.instanceType(InstanceType.of(InstanceClass.T3, InstanceSize.MEDIUM))
.minCapacity(2)
.build());
Endpoint Access
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).
You can configure the cluster endpoint access by using the endpointAccess property:
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
.version(KubernetesVersion.V1_34)
.endpointAccess(EndpointAccess.PRIVATE)
.build();
The default value is EndpointAccess.PUBLIC_AND_PRIVATE, which means the cluster endpoint is accessible from outside of your VPC, while worker node traffic and kubectl commands issued by this library stay within your VPC.
Alb Controller
Some Kubernetes resources are commonly implemented on AWS with the help of the ALB Controller.
From the docs:
AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
- It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
- It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
To deploy the controller on your EKS cluster, configure the albController property:
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.albController(AlbControllerOptions.builder()
.version(AlbControllerVersion.V2_8_2)
.build())
.build();
To provide additional Helm chart values supported by albController in CDK, use the additionalHelmChartValues property. For example, the following code snippet shows how to set the enableWafv2 flag:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.albController(AlbControllerOptions.builder()
.version(AlbControllerVersion.V2_8_2)
.additionalHelmChartValues(Map.of(
"enableWafv2", false))
.build())
.build();
To overwrite an existing ALB controller service account, use the overwriteServiceAccount property:
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.albController(AlbControllerOptions.builder()
.version(AlbControllerVersion.V2_8_2)
.overwriteServiceAccount(true)
.build())
.build();
The albController requires defaultCapacity or at least one nodegroup. If there's no defaultCapacity or available
nodegroup for the cluster, the albController deployment would fail.
Querying the controller pods should look something like this:
❯ kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-76bd6c7586-d929p   1/1     Running   0          109m
aws-load-balancer-controller-76bd6c7586-fqxph   1/1     Running   0          109m
...
Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller. If the controller is deleted before the manifest, it might result in dangling ELB/ALB resources. Currently, the EKS construct library does not detect such dependencies, so they should be declared explicitly.
For example:
Cluster cluster;
KubernetesManifest manifest = cluster.addManifest("manifest", Map.of());
if (cluster.getAlbController() != null) {
manifest.node.addDependency(cluster.getAlbController());
}
VPC Support
You can specify the VPC of the cluster using the vpc and vpcSubnets properties:
Vpc vpc;
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.vpc(vpc)
.vpcSubnets(List.of(SubnetSelection.builder().subnetType(SubnetType.PRIVATE_WITH_EGRESS).build()))
.build();
If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).
Please note that the vpcSubnets property defines the subnets where EKS will place the control plane ENIs. To choose
the subnets where EKS will place the worker nodes, please refer to the Provisioning clusters section above.
If you allocate self-managed capacity, you can specify which subnets the Auto Scaling Group should use:
Vpc vpc;
Cluster cluster;
cluster.addAutoScalingGroupCapacity("nodes", AutoScalingGroupCapacityOptions.builder()
.vpcSubnets(SubnetSelection.builder().subnets(vpc.getPrivateSubnets()).build())
.instanceType(new InstanceType("t2.medium"))
.build());
There is an additional component you might want to provision within the VPC.
The KubectlHandler is a Lambda function responsible for issuing kubectl and helm commands against the cluster when you add resource manifests to it.
The handler association to the VPC is derived from the endpointAccess configuration. The rule of thumb is: If the cluster VPC can be associated, it will be.
Breaking this down, it means that if the endpoint exposes private access (via EndpointAccess.PRIVATE or EndpointAccess.PUBLIC_AND_PRIVATE), and the VPC contains private subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.
If the endpoint does not expose private access (via EndpointAccess.PUBLIC) or the VPC does not contain private subnets, the function will not be provisioned within the VPC.
If your use-case requires control over the IAM role that the Kubectl Handler assumes, a custom role can be passed through the ClusterProps (as kubectlLambdaRole) of the EKS Cluster construct.
Kubectl Support
You can choose to have CDK create a Kubectl Handler - a Python Lambda Function to
apply k8s manifests using kubectl apply. This handler will not be created by default.
To create a Kubectl Handler, use kubectlProviderOptions when creating the cluster.
kubectlLayer is the only required property in kubectlProviderOptions.
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster.Builder.create(this, "hello-eks")
.version(KubernetesVersion.V1_34)
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.build())
.build();
Kubectl Handler created along with the cluster will be granted admin permissions to the cluster.
If you want to use an existing kubectl provider function, for example one with tightly scoped trusted entities on its IAM role, you can import the existing provider and then use it when importing the cluster:
IRole handlerRole = Role.fromRoleArn(this, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role");
// get the serviceToken from the custom resource provider
String functionArn = Function.fromFunctionName(this, "ProviderOnEventFunc", "ProviderframeworkonEvent-XXX").getFunctionArn();
IKubectlProvider kubectlProvider = KubectlProvider.fromKubectlProviderAttributes(this, "KubectlProvider", KubectlProviderAttributes.builder()
.serviceToken(functionArn)
.role(handlerRole)
.build());
ICluster cluster = Cluster.fromClusterAttributes(this, "Cluster", ClusterAttributes.builder()
.clusterName("cluster")
.kubectlProvider(kubectlProvider)
.build());
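With the imported provider in place, the imported cluster can run kubectl operations such as applying a manifest; a small sketch (the ConfigMap content is illustrative):

```java
cluster.addManifest("TestConfigMap", Map.of(
"apiVersion", "v1",
"kind", "ConfigMap",
"metadata", Map.of("name", "myconfigmap"),
"data", Map.of("key", "value")));
```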
Environment
You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
.version(KubernetesVersion.V1_34)
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.environment(Map.of(
"http_proxy", "http://proxy.myproxy.com"))
.build())
.build();
Runtime
The kubectl handler uses kubectl, helm and the aws CLI in order to
interact with the cluster. These are bundled into AWS Lambda layers included in
the @aws-cdk/lambda-layer-awscli and @aws-cdk/lambda-layer-kubectl modules.
The version of kubectl used must be compatible with the Kubernetes version of the
cluster. kubectl is supported within one minor version (older or newer) of Kubernetes
(see Kubernetes version skew policy).
Depending on which version of kubernetes you're targeting, you will need to use one of
the @aws-cdk/lambda-layer-kubectl-vXY packages.
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster cluster = Cluster.Builder.create(this, "hello-eks")
.version(KubernetesVersion.V1_34)
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.build())
.build();
Memory
By default, the kubectl provider is configured with 1024MiB of memory. You can use the memory option to specify the memory size for the AWS Lambda function:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster.Builder.create(this, "MyCluster")
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.memory(Size.gibibytes(4))
.build())
.version(KubernetesVersion.V1_34)
.build();
ARM64 Support
Instance types with ARM64 architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 instanceType (such as m6g.medium), and the latest
Amazon Linux 2 AMI for ARM64 will be automatically selected.
Cluster cluster;
// add a managed ARM64 nodegroup
cluster.addNodegroupCapacity("extra-ng-arm", NodegroupOptions.builder()
.instanceTypes(List.of(new InstanceType("m6g.medium")))
.minSize(2)
.build());
// add a self-managed ARM64 nodegroup
cluster.addAutoScalingGroupCapacity("self-ng-arm", AutoScalingGroupCapacityOptions.builder()
.instanceType(new InstanceType("m6g.medium"))
.minCapacity(2)
.build());
Masters Role
When you create a cluster, you can specify a mastersRole. The Cluster construct will associate this role with AmazonEKSClusterAdminPolicy through Access Entry.
Role role;
Cluster.Builder.create(this, "HelloEKS")
.version(KubernetesVersion.V1_34)
.mastersRole(role)
.build();
If you do not specify it, you won't have access to the cluster from outside of the CDK application.
Encryption
When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled. The documentation on creating a cluster can provide more details about the customer master key (CMK) that can be used for the encryption.
You can use the secretsEncryptionKey to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.
This setting can only be specified when the cluster is created and cannot be updated.
Key secretsKey = new Key(this, "SecretsKey");
Cluster cluster = Cluster.Builder.create(this, "MyCluster")
.secretsEncryptionKey(secretsKey)
.version(KubernetesVersion.V1_34)
.build();
You can also use a similar configuration for running a cluster built using the FargateCluster construct.
Key secretsKey = new Key(this, "SecretsKey");
FargateCluster cluster = FargateCluster.Builder.create(this, "MyFargateCluster")
.secretsEncryptionKey(secretsKey)
.version(KubernetesVersion.V1_34)
.build();
The Amazon Resource Name (ARN) for that CMK can be retrieved.
Cluster cluster;
String clusterEncryptionConfigKeyArn = cluster.getClusterEncryptionConfigKeyArn();
Hybrid Nodes
When you create an Amazon EKS cluster, you can configure it to leverage the EKS Hybrid Nodes feature, allowing you to use your on-premises and edge infrastructure as nodes in your EKS cluster. Refer to the Hybrid Nodes networking documentation to configure your on-premises network, node and pod CIDRs, access control, etc. before creating your EKS cluster.
Once you have identified the on-premises node and pod (optional) CIDRs you will use for your hybrid nodes and the workloads running on them, you can specify them during cluster creation using the remoteNodeNetworks and remotePodNetworks (optional) properties:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster.Builder.create(this, "Cluster")
.version(KubernetesVersion.V1_34)
.remoteNodeNetworks(List.of(RemoteNodeNetwork.builder()
.cidrs(List.of("10.0.0.0/16"))
.build()))
.remotePodNetworks(List.of(RemotePodNetwork.builder()
.cidrs(List.of("192.168.0.0/16"))
.build()))
.build();
Self-Managed Add-ons
Amazon EKS automatically installs self-managed add-ons such as the Amazon VPC CNI plugin for Kubernetes, kube-proxy, and CoreDNS for every cluster. You can change the default configuration of the add-ons and update them when desired. If you wish to create a cluster without the default add-ons, set bootstrapSelfManagedAddons as false. When this is set to false, make sure to install the necessary alternatives which provide functionality that enables pod and service operations for your EKS cluster.
Changing the value of bootstrapSelfManagedAddons after the EKS cluster has been created will result in a replacement of the cluster.
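A sketch of opting out of the default add-ons at creation time (assuming bootstrapSelfManagedAddons is exposed on the builder):

```java
// Cluster without the default VPC CNI, kube-proxy and CoreDNS add-ons;
// alternatives must be installed for pod and service operations to work.
Cluster cluster = Cluster.Builder.create(this, "BareCluster")
.version(KubernetesVersion.V1_34)
.bootstrapSelfManagedAddons(false)
.build();
```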
Permissions and Security
In the new EKS module, ConfigMap-based authentication (aws-auth) is deprecated. Clusters created by the new module use API as the authentication mode, and Access Entries are the only way to grant permissions to specific IAM users and roles.
Access Entry
An access entry is a cluster identity, directly linked to an AWS IAM principal (user or role), that is used to authenticate to an Amazon EKS cluster. An Amazon EKS access policy authorizes an access entry to perform specific cluster actions.
Access policies are Amazon EKS-specific policies that assign Kubernetes permissions to access entries. Amazon EKS supports only predefined and AWS managed policies. Access policies are not AWS IAM entities and are defined and managed by Amazon EKS. Amazon EKS access policies include permission sets that support common use cases of administration, editing, or read-only access to Kubernetes resources. See Access Policy Permissions for more details.
Use AccessPolicy to include predefined AWS managed policies:
// AmazonEKSClusterAdminPolicy with `cluster` scope
AccessPolicy.fromAccessPolicyName("AmazonEKSClusterAdminPolicy", AccessPolicyNameOptions.builder()
.accessScopeType(AccessScopeType.CLUSTER)
.build());
// AmazonEKSAdminPolicy with `namespace` scope
AccessPolicy.fromAccessPolicyName("AmazonEKSAdminPolicy", AccessPolicyNameOptions.builder()
.accessScopeType(AccessScopeType.NAMESPACE)
.namespaces(List.of("foo", "bar"))
.build());
Use grantAccess() to grant the AccessPolicy to an IAM principal:
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Vpc vpc;
Role clusterAdminRole = Role.Builder.create(this, "ClusterAdminRole")
.assumedBy(new ArnPrincipal("arn_for_trusted_principal"))
.build();
Role eksAdminRole = Role.Builder.create(this, "EKSAdminRole")
.assumedBy(new ArnPrincipal("arn_for_trusted_principal"))
.build();
Cluster cluster = Cluster.Builder.create(this, "Cluster")
.vpc(vpc)
.mastersRole(clusterAdminRole)
.version(KubernetesVersion.V1_34)
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.memory(Size.gibibytes(4))
.build())
.build();
// Cluster Admin role for this cluster
cluster.grantAccess("clusterAdminAccess", clusterAdminRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSClusterAdminPolicy", AccessPolicyNameOptions.builder()
.accessScopeType(AccessScopeType.CLUSTER)
.build())));
// EKS Admin role for specified namespaces of this cluster
cluster.grantAccess("eksAdminRoleAccess", eksAdminRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSAdminPolicy", AccessPolicyNameOptions.builder()
.accessScopeType(AccessScopeType.NAMESPACE)
.namespaces(List.of("foo", "bar"))
.build())));
Access Entry Types
You can optionally specify an access entry type when granting access. This is particularly useful for EKS Auto Mode clusters with custom node roles, which require the EC2 type:
Cluster cluster;
Role nodeRole;
// Grant access with EC2 type for Auto Mode node role
cluster.grantAccess("nodeAccess", nodeRole.getRoleArn(), List.of(AccessPolicy.fromAccessPolicyName("AmazonEKSAutoNodePolicy", AccessPolicyNameOptions.builder()
.accessScopeType(AccessScopeType.CLUSTER)
.build())), GrantAccessOptions.builder().accessEntryType(AccessEntryType.EC2).build());
The following access entry types are supported:
- STANDARD – Default type for standard IAM principals (default when not specified)
- FARGATE_LINUX – For Fargate profiles
- EC2_LINUX – For EC2 Linux worker nodes
- EC2_WINDOWS – For EC2 Windows worker nodes
- EC2 – For EKS Auto Mode node roles
- HYBRID_LINUX – For EKS Hybrid Nodes
- HYPERPOD_LINUX – For Amazon SageMaker HyperPod
Note: Access entries with type EC2, HYBRID_LINUX, or HYPERPOD_LINUX cannot have access policies attached per AWS EKS API constraints. For these types, use the AccessEntry construct directly with an empty access policies array.
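A minimal sketch of that approach, assuming the AccessEntry builder accepts the cluster, principal, type, and an empty access policy list (the role is illustrative):

```java
Cluster cluster;
Role hybridNodeRole;
// No access policies may be attached for the HYBRID_LINUX type
AccessEntry.Builder.create(this, "HybridNodeAccessEntry")
.cluster(cluster)
.principal(hybridNodeRole.getRoleArn())
.accessEntryType(AccessEntryType.HYBRID_LINUX)
.accessPolicies(List.of())
.build();
```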
By default, the cluster creator role will be granted the cluster admin permissions. You can disable this by setting bootstrapClusterCreatorAdminPermissions to false.
Note - Switching bootstrapClusterCreatorAdminPermissions on an existing cluster would cause cluster replacement and should be avoided in production.
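For example, a sketch of disabling it at creation time:

```java
// The cluster creator will NOT be granted admin permissions;
// grant access explicitly (e.g. via mastersRole or grantAccess) instead.
Cluster cluster = Cluster.Builder.create(this, "Cluster")
.version(KubernetesVersion.V1_34)
.bootstrapClusterCreatorAdminPermissions(false)
.build();
```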
Service Accounts
With service accounts you can provide Kubernetes Pods with access to AWS resources.
import software.amazon.awscdk.services.s3.*;
Cluster cluster;
// add service account
ServiceAccount serviceAccount = cluster.addServiceAccount("MyServiceAccount");
Bucket bucket = new Bucket(this, "Bucket");
bucket.grantReadWrite(serviceAccount);
KubernetesManifest mypod = cluster.addManifest("mypod", Map.of(
"apiVersion", "v1",
"kind", "Pod",
"metadata", Map.of("name", "mypod"),
"spec", Map.of(
"serviceAccountName", serviceAccount.getServiceAccountName(),
"containers", List.of(Map.of(
"name", "hello",
"image", "paulbouwer/hello-kubernetes:1.5",
"ports", List.of(Map.of("containerPort", 8080)))))));
// create the resource after the service account.
mypod.node.addDependency(serviceAccount);
// print the IAM role arn for this service account
CfnOutput.Builder.create(this, "ServiceAccountIamRole").value(serviceAccount.getRole().getRoleArn()).build();
Note that using serviceAccount.serviceAccountName above does not translate into a resource dependency.
This is why an explicit dependency is needed. See https://github.com/aws/aws-cdk/issues/9910 for more details.
It is possible to pass annotations and labels to the service account.
Cluster cluster;
// add service account with annotations and labels
ServiceAccount serviceAccount = cluster.addServiceAccount("MyServiceAccount", ServiceAccountOptions.builder()
.annotations(Map.of(
"eks.amazonaws.com/sts-regional-endpoints", "false"))
.labels(Map.of(
"some-label", "with-some-value"))
.build());
You can also add service accounts to existing clusters.
To do so, pass the openIdConnectProvider property when you import the cluster into the application.
import software.amazon.awscdk.services.s3.*;
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
String issuerUrl;
// you can import an existing provider
IOidcProvider provider = OidcProviderNative.fromOidcProviderArn(this, "Provider", "arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC");
// or create a new one using an existing issuer url
OidcProviderNative provider2 = OidcProviderNative.Builder.create(this, "Provider2")
.url(issuerUrl)
.build();
ICluster cluster = Cluster.fromClusterAttributes(this, "MyCluster", ClusterAttributes.builder()
.clusterName("Cluster")
.openIdConnectProvider(provider)
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.build())
.build());
ServiceAccount serviceAccount = cluster.addServiceAccount("MyServiceAccount");
Bucket bucket = new Bucket(this, "Bucket");
bucket.grantReadWrite(serviceAccount);
Note that adding service accounts requires running kubectl commands against the cluster which requires you to provide kubectlProviderOptions in the cluster props to create the kubectl provider. See Kubectl Support
Migrating from the deprecated eks.OpenIdConnectProvider to eks.OidcProviderNative
eks.OpenIdConnectProvider creates an IAM OIDC (OpenID Connect) provider using a custom resource, while eks.OidcProviderNative uses the native L1 resource (AWS::IAM::OIDCProvider) to create the provider. Using eks.OidcProviderNative is recommended for both new and existing projects.
To migrate without temporarily removing the OIDCProvider, follow these steps:
- Set the removalPolicy of cluster.openIdConnectProvider to RETAIN.

import software.amazon.awscdk.*;
Cluster cluster;
RemovalPolicies.of(cluster.getOpenIdConnectProvider()).apply(RemovalPolicy.RETAIN);

- Run cdk diff to verify the changes are expected, then cdk deploy.
- Add the following to the context field of your cdk.json to enable the feature flag that creates the native OIDC provider:

"@aws-cdk/aws-eks:useNativeOidcProvider": true,

- Run cdk diff and ensure the changes are expected. Example of an expected diff:

Resources
[-] Custom::AWSCDKOpenIdConnectProvider TestCluster/OpenIdConnectProvider/Resource TestClusterOpenIdConnectProviderE18F0FD0 orphan
[-] AWS::IAM::Role Custom::AWSCDKOpenIdConnectProviderCustomResourceProvider/Role CustomAWSCDKOpenIdConnectProviderCustomResourceProviderRole517FED65 destroy
[-] AWS::Lambda::Function Custom::AWSCDKOpenIdConnectProviderCustomResourceProvider/Handler CustomAWSCDKOpenIdConnectProviderCustomResourceProviderHandlerF2C543E0 destroy
[+] AWS::IAM::OIDCProvider TestCluster/OidcProviderNative TestClusterOidcProviderNative0BE3F155

- Run cdk import --force and provide the ARN of the existing OpenIdConnectProvider when prompted. You will get a warning about pending changes to existing resources, which is expected.
- Run cdk deploy to apply any pending changes. This will apply the destroy/orphan changes in the above example.
If you are creating the OpenIdConnectProvider manually via new eks.OpenIdConnectProvider, follow these steps:
- Set the removalPolicy of the existing OpenIdConnectProvider to RemovalPolicy.RETAIN.

import software.amazon.awscdk.*;
// Step 1: Add retain policy to existing provider
OpenIdConnectProvider existingProvider = OpenIdConnectProvider.Builder.create(this, "Provider")
.url("https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE")
.removalPolicy(RemovalPolicy.RETAIN)
.build();

- Deploy with the retain policy to avoid deletion of the underlying resource:

cdk deploy

- Replace OpenIdConnectProvider with OidcProviderNative in your code.

// Step 3: Replace with native provider
OidcProviderNative nativeProvider = OidcProviderNative.Builder.create(this, "Provider")
.url("https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE")
.build();

- Run cdk diff and verify the changes are expected. Example of an expected diff:

Resources
[-] Custom::AWSCDKOpenIdConnectProvider TestCluster/OpenIdConnectProvider/Resource TestClusterOpenIdConnectProviderE18F0FD0 orphan
[-] AWS::IAM::Role Custom::AWSCDKOpenIdConnectProviderCustomResourceProvider/Role CustomAWSCDKOpenIdConnectProviderCustomResourceProviderRole517FED65 destroy
[-] AWS::Lambda::Function Custom::AWSCDKOpenIdConnectProviderCustomResourceProvider/Handler CustomAWSCDKOpenIdConnectProviderCustomResourceProviderHandlerF2C543E0 destroy
[+] AWS::IAM::OIDCProvider TestCluster/OidcProviderNative TestClusterOidcProviderNative0BE3F155

- Run cdk import --force to import the existing OIDC provider resource by providing the existing ARN.
- Run cdk deploy to apply any pending changes. This will apply the destroy/orphan operations in the example diff above.
Cluster Security Group
When you create an Amazon EKS cluster, a cluster security group is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely between each other.
The ID for that security group can be retrieved after creating the cluster.
Cluster cluster;
String clusterSecurityGroupId = cluster.getClusterSecurityGroupId();
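The ID can then be used to import the security group elsewhere, for example to add extra ingress rules. A sketch using the aws-ec2 module (the CIDR range is illustrative):

```java
import software.amazon.awscdk.services.ec2.*;
Cluster cluster;
// import the cluster security group by its ID
ISecurityGroup clusterSecurityGroup = SecurityGroup.fromSecurityGroupId(this, "ClusterSG", cluster.getClusterSecurityGroupId());
// allow HTTPS traffic from a private CIDR range (illustrative)
clusterSecurityGroup.addIngressRule(Peer.ipv4("10.0.0.0/16"), Port.tcp(443));
```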
Applying Kubernetes Resources
To apply Kubernetes resources, a kubectl provider needs to be created for the cluster. You can use kubectlProviderOptions to create the kubectl provider.
The library supports several popular resource deployment mechanisms, among which are:
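As a sketch, a cluster with a kubectl provider could be defined like this, using the KubectlV34Layer shown elsewhere in this document (pick the layer matching your cluster version):

```java
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v34.KubectlV34Layer;
Cluster cluster = Cluster.Builder.create(this, "Cluster")
.version(KubernetesVersion.V1_34)
// without kubectlProviderOptions no kubectl handler is created,
// and manifests/helm charts cannot be applied
.kubectlProviderOptions(KubectlProviderOptions.builder()
.kubectlLayer(new KubectlV34Layer(this, "kubectl"))
.build())
.build();
```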
Kubernetes Manifests
The KubernetesManifest construct or cluster.addManifest method can be used
to apply Kubernetes resource manifests to this cluster.
When using cluster.addManifest, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth-time error. To avoid this, directly use new KubernetesManifest to create the manifest in the scope of the other stack.
The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:
Cluster cluster;
Map<String, String> appLabel = Map.of("app", "hello-kubernetes");
Map<String, Object> deployment = Map.of(
"apiVersion", "apps/v1",
"kind", "Deployment",
"metadata", Map.of("name", "hello-kubernetes"),
"spec", Map.of(
"replicas", 3,
"selector", Map.of("matchLabels", appLabel),
"template", Map.of(
"metadata", Map.of("labels", appLabel),
"spec", Map.of(
"containers", List.of(Map.of(
"name", "hello-kubernetes",
"image", "paulbouwer/hello-kubernetes:1.5",
"ports", List.of(Map.of("containerPort", 8080))))))));
Map<String, Object> service = Map.of(
"apiVersion", "v1",
"kind", "Service",
"metadata", Map.of("name", "hello-kubernetes"),
"spec", Map.of(
"type", "LoadBalancer",
"ports", List.of(Map.of("port", 80, "targetPort", 8080)),
"selector", appLabel));
// option 1: use a construct
KubernetesManifest.Builder.create(this, "hello-kub")
.cluster(cluster)
.manifest(List.of(deployment, service))
.build();
// or, option2: use `addManifest`
cluster.addManifest("hello-kub", service, deployment);
ALB Controller Integration
The KubernetesManifest construct can detect ingress resources inside your manifest and automatically add the necessary annotations
so they are picked up by the ALB Controller.
See Alb Controller
To that end, it offers the following properties:
- ingressAlb - Signal that the ingress detection should be done.
- ingressAlbScheme - Which ALB scheme should be applied. Defaults to internal.
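For example, to annotate ingresses in a manifest for an internet-facing ALB (a sketch; the ingress map stands in for a full Ingress resource):

```java
Cluster cluster;
Map<String, Object> ingress;
KubernetesManifest.Builder.create(this, "IngressManifest")
.cluster(cluster)
.manifest(List.of(ingress))
// detect ingress resources in the manifest and annotate them for the ALB Controller
.ingressAlb(true)
.ingressAlbScheme(AlbScheme.INTERNET_FACING)
.build();
```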
Adding resources from a URL
The following example will deploy a resource manifest hosted on a remote server:
// This example is only available in TypeScript
import * as yaml from 'js-yaml';
import * as request from 'sync-request';
declare const cluster: eks.Cluster;
const manifestUrl = 'https://url/of/manifest.yaml';
const manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody());
cluster.addManifest('my-resource', manifest);
Dependencies
There are cases where Kubernetes resources must be deployed in a specific order. For example, you cannot define a resource in a Kubernetes namespace before the namespace was created.
You can represent dependencies between KubernetesManifests using
resource.node.addDependency():
Cluster cluster;
KubernetesManifest namespace = cluster.addManifest("my-namespace", Map.of(
"apiVersion", "v1",
"kind", "Namespace",
"metadata", Map.of("name", "my-app")));
KubernetesManifest service = cluster.addManifest("my-service", Map.of(
"metadata", Map.of(
"name", "myservice",
"namespace", "my-app"),
"spec", Map.of()));
service.node.addDependency(namespace);
NOTE: when a KubernetesManifest includes multiple resources (either directly
or through cluster.addManifest()) (e.g. cluster.addManifest('foo', r1, r2, r3,...)), these resources will be applied as a single manifest via kubectl
and will be applied sequentially (the standard behavior in kubectl).
Kubernetes manifests are implemented as CloudFormation resources in the
CDK. This means that if the manifest is deleted from your code (or the stack is
deleted), the next cdk deploy will issue a kubectl delete command and the
Kubernetes resources in that manifest will be deleted.
Resource Pruning
When a resource is deleted from a Kubernetes manifest, the EKS module will
automatically delete it from the cluster. This works by injecting a prune label into all
manifest resources; this label is then passed to kubectl apply --prune.
Pruning is enabled by default but can be disabled through the prune option
when a cluster is defined:
Cluster.Builder.create(this, "MyCluster")
.version(KubernetesVersion.V1_34)
.prune(false)
.build();
Manifests Validation
The kubectl CLI supports applying a manifest by skipping the validation.
This can be accomplished by setting the skipValidation flag to true in the KubernetesManifest props.
Cluster cluster;
KubernetesManifest.Builder.create(this, "HelloAppWithoutValidation")
.cluster(cluster)
.manifest(List.of(Map.of("foo", "bar")))
.skipValidation(true)
.build();
Helm Charts
The HelmChart construct or cluster.addHelmChart method can be used
to add Kubernetes resources to this cluster using Helm.
When using cluster.addHelmChart, the chart construct is defined within the cluster's stack scope. If the chart contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth-time error. To avoid this, directly use new HelmChart to create the chart in the scope of the other stack.
The following example will install the NGINX Ingress Controller to your cluster using Helm.
Cluster cluster;
// option 1: use a construct
HelmChart.Builder.create(this, "NginxIngress")
.cluster(cluster)
.chart("nginx-ingress")
.repository("https://helm.nginx.com/stable")
.namespace("kube-system")
.build();
// or, option2: use `addHelmChart`
cluster.addHelmChart("NginxIngress", HelmChartOptions.builder()
.chart("nginx-ingress")
.repository("https://helm.nginx.com/stable")
.namespace("kube-system")
.build());
Helm charts will be installed and updated using helm upgrade --install, where a few parameters
are passed down (such as repo, values, version, namespace, wait, timeout, etc.).
This means that if the chart is added to CDK with the same release name, it will try to update
the chart in the cluster.
Additionally, the chartAsset property can be an aws-s3-assets.Asset. This allows the use of local, private helm charts.
import software.amazon.awscdk.services.s3.assets.*;
Cluster cluster;
Asset chartAsset = Asset.Builder.create(this, "ChartAsset")
.path("/path/to/asset")
.build();
cluster.addHelmChart("test-chart", HelmChartOptions.builder()
.chartAsset(chartAsset)
.build());
Nested values passed to the values parameter should be provided as a nested dictionary:
Cluster cluster;
cluster.addHelmChart("ExternalSecretsOperator", HelmChartOptions.builder()
.chart("external-secrets")
.release("external-secrets")
.repository("https://charts.external-secrets.io")
.namespace("external-secrets")
.values(Map.of(
"installCRDs", true,
"webhook", Map.of(
"port", 9443)))
.build());
Helm charts can come with Custom Resource Definitions (CRDs), which by default will be installed by Helm as well. In special cases you may need to skip the installation of CRDs; for that, the skipCrds property can be used.
Cluster cluster;
// option 1: use a construct
HelmChart.Builder.create(this, "NginxIngress")
.cluster(cluster)
.chart("nginx-ingress")
.repository("https://helm.nginx.com/stable")
.namespace("kube-system")
.skipCrds(true)
.build();
OCI Charts
OCI charts are also supported.
Replace the ${VARS} in the following example with appropriate values.
Cluster cluster;
// option 1: use a construct
HelmChart.Builder.create(this, "MyOCIChart")
.cluster(cluster)
.chart("some-chart")
.repository("oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}")
.namespace("oci")
.version("0.0.1")
.build();
Helm charts are implemented as CloudFormation resources in CDK.
This means that if the chart is deleted from your code (or the stack is
deleted), the next cdk deploy will issue a helm uninstall command and the
Helm chart will be deleted.
When there is no release defined, a unique ID will be allocated for the release based
on the construct path.
By default, all Helm charts will be installed concurrently. In some cases, this
could cause race conditions where two Helm charts attempt to deploy the same
resource or if Helm charts depend on each other. You can use
chart.node.addDependency() in order to declare a dependency order between
charts:
Cluster cluster;
HelmChart chart1 = cluster.addHelmChart("MyChart", HelmChartOptions.builder()
.chart("foo")
.build());
HelmChart chart2 = cluster.addHelmChart("MyOtherChart", HelmChartOptions.builder()
.chart("bar")
.build());
chart2.node.addDependency(chart1);
Custom CDK8s Constructs
You can also compose a few stock cdk8s+ constructs into your own custom construct. However, since mixing scopes between aws-cdk and cdk8s is currently not supported, the Construct class
you'll need to use is the one from the constructs module, and not from aws-cdk-lib like you normally would.
This is why we used new cdk8s.App() as the scope of the chart above.
import software.constructs.*;
import org.cdk8s.*;
import org.cdk8s.plus25.*;
public class LoadBalancedWebServiceProps {
private Number port;
public Number getPort() {
return this.port;
}
public LoadBalancedWebServiceProps port(Number port) {
this.port = port;
return this;
}
private String image;
public String getImage() {
return this.image;
}
public LoadBalancedWebServiceProps image(String image) {
this.image = image;
return this;
}
private Number replicas;
public Number getReplicas() {
return this.replicas;
}
public LoadBalancedWebServiceProps replicas(Number replicas) {
this.replicas = replicas;
return this;
}
}
App app = new App();
Chart chart = new Chart(app, "my-chart");
public class LoadBalancedWebService extends Construct {
public LoadBalancedWebService(Construct scope, String id, LoadBalancedWebServiceProps props) {
super(scope, id);
Deployment deployment = Deployment.Builder.create(this, "Deployment")
.replicas(props.getReplicas())
.containers(List.of(Container.Builder.create().image(props.getImage()).build()))
.build();
deployment.exposeViaService(DeploymentExposeViaServiceOptions.builder()
.ports(List.of(ServicePort.builder().port(props.getPort()).build()))
.serviceType(ServiceType.LOAD_BALANCER)
.build());
}
}
Manually importing k8s specs and CRDs
If you find yourself unable to use cdk8s+, or just want to directly use the k8s native objects or CRDs, you can do so by manually importing them using the cdk8s-cli.
See Importing kubernetes objects for detailed instructions.
Patching Kubernetes Resources
The KubernetesPatch construct can be used to update existing kubernetes
resources. The following example can be used to patch the hello-kubernetes
deployment from the example above with 5 replicas.
Cluster cluster;
KubernetesPatch.Builder.create(this, "hello-kub-deployment-label")
.cluster(cluster)
.resourceName("deployment/hello-kubernetes")
.applyPatch(Map.of("spec", Map.of("replicas", 5)))
.restorePatch(Map.of("spec", Map.of("replicas", 3)))
.build();
Querying Kubernetes Resources
The KubernetesObjectValue construct can be used to query for information about kubernetes objects,
and use that as part of your CDK application.
For example, you can fetch the address of a LoadBalancer type service:
Cluster cluster;
// query the load balancer address
KubernetesObjectValue myServiceAddress = KubernetesObjectValue.Builder.create(this, "LoadBalancerAttribute")
.cluster(cluster)
.objectType("service")
.objectName("my-service")
.jsonPath(".status.loadBalancer.ingress[0].hostname")
.build();
// pass the address to a lambda function
Function proxyFunction = Function.Builder.create(this, "ProxyFunction")
.handler("index.handler")
.code(Code.fromInline("my-code"))
.runtime(Runtime.NODEJS_LATEST)
.environment(Map.of(
"myServiceAddress", myServiceAddress.getValue()))
.build();
Since the above use case is quite common, there is an easier way to access that information:
Cluster cluster;
String loadBalancerAddress = cluster.getServiceLoadBalancerAddress("my-service");
Add-ons
An add-on is software that provides supporting operational capabilities to Kubernetes applications. The EKS module supports adding add-ons to your cluster using the eks.Addon class.
Cluster cluster;
Addon.Builder.create(this, "Addon")
.cluster(cluster)
.addonName("coredns")
.addonVersion("v1.11.4-eksbuild.2")
// whether to preserve the add-on software on your cluster when the construct is deleted (Amazon EKS stops managing any settings for the add-on either way)
.preserveOnDelete(false)
.configurationValues(Map.of(
"replicaCount", 2))
.build();
Using existing clusters
The EKS library allows defining Kubernetes resources such as Kubernetes manifests and Helm charts on clusters that are not defined as part of your CDK app.
First, you will need to import the kubectl provider and cluster created in another stack:
IRole handlerRole = Role.fromRoleArn(this, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role");
IKubectlProvider kubectlProvider = KubectlProvider.fromKubectlProviderAttributes(this, "KubectlProvider", KubectlProviderAttributes.builder()
.serviceToken("arn:aws:lambda:us-east-2:123456789012:function:my-function:1")
.role(handlerRole)
.build());
ICluster cluster = Cluster.fromClusterAttributes(this, "Cluster", ClusterAttributes.builder()
.clusterName("cluster")
.kubectlProvider(kubectlProvider)
.build());
Then, you can use addManifest or addHelmChart to define resources inside
your Kubernetes cluster.
Cluster cluster;
cluster.addManifest("Test", Map.of(
"apiVersion", "v1",
"kind", "ConfigMap",
"metadata", Map.of(
"name", "myconfigmap"),
"data", Map.of(
"Key", "value",
"Another", "123454")));
Logging
EKS supports cluster logging for 5 different types of events:
- API requests to the cluster.
- Cluster access via the Kubernetes API.
- Authentication requests into the cluster.
- State of cluster controllers.
- Scheduling decisions.
You can enable logging for each one separately using the clusterLogging
property. For example:
Cluster cluster = Cluster.Builder.create(this, "Cluster")
// ...
.version(KubernetesVersion.V1_34)
.clusterLogging(List.of(ClusterLoggingTypes.API, ClusterLoggingTypes.AUTHENTICATOR, ClusterLoggingTypes.SCHEDULER))
.build();
NodeGroup Repair Config
You can enable managed node group auto-repair using the enableNodeAutoRepair
property. For example:
Cluster cluster;
cluster.addNodegroupCapacity("NodeGroup", NodegroupOptions.builder()
.enableNodeAutoRepair(true)
.build());