# Creating and managing clusters
This topic covers how to create and delete EKS clusters using eksctl. You can create clusters with a single CLI command, or by writing a cluster configuration YAML file.
## Creating a simple cluster
Create a simple cluster with the following command:
```sh
eksctl create cluster
```
That will create an EKS cluster in your default region (as specified by your AWS CLI configuration) with one managed nodegroup containing two m5.large nodes.
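Most of those defaults can be overridden from the command line. A minimal sketch; the cluster name and values here are illustrative, not prescribed by eksctl:

```sh
# Create a cluster with an explicit name, region, and initial node count.
# "my-cluster" is a placeholder name.
eksctl create cluster --name=my-cluster --region=eu-north-1 --nodes=3
```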
eksctl now creates a managed nodegroup by default when a config file isn't used. To create a self-managed nodegroup instead, pass `--managed=false` to `eksctl create cluster` or `eksctl create nodegroup`, as shown in the sketch below.
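A minimal sketch, assuming an existing cluster; the cluster and nodegroup names are placeholders:

```sh
# Add a self-managed (unmanaged) nodegroup to an existing cluster.
# "my-cluster" and "ng-unmanaged" are illustrative names.
eksctl create nodegroup --cluster=my-cluster --name=ng-unmanaged --managed=false
```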
### Considerations
- When creating clusters in `us-east-1`, you might encounter an `UnsupportedAvailabilityZoneException`. If this happens, copy the suggested zones and pass the `--zones` flag, as shown in the example below. This issue may occur in other regions but is less common. In most cases, you won't need to use the `--zones` flag.
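For example:

```sh
eksctl create cluster --region=us-east-1 --zones=us-east-1a,us-east-1b,us-east-1d
```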
## Create cluster using config file
You can create a cluster using a config file instead of flags.
First, create a `cluster.yaml` file:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: basic-cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 10
    volumeSize: 80
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
  - name: ng-2
    instanceType: m5.xlarge
    desiredCapacity: 2
    volumeSize: 100
    ssh:
      publicKeyPath: ~/.ssh/ec2_id_rsa.pub
```
Next, run this command:
```sh
eksctl create cluster -f cluster.yaml
```
This will create a cluster as described in the config file.
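Once creation finishes, you can sanity-check the new nodes with kubectl, assuming your kubeconfig was updated as described later in this topic:

```sh
# List the nodes that joined the cluster; with the config above you
# should eventually see 12 nodes (10 in ng-1 and 2 in ng-2).
kubectl get nodes
```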
If you need to use an existing VPC, you can use a config file like this:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-in-existing-vpc
  region: eu-north-1

vpc:
  subnets:
    private:
      eu-north-1a: { id: subnet-0ff156e0c4a6d300c }
      eu-north-1b: { id: subnet-0549cdab573695c03 }
      eu-north-1c: { id: subnet-0426fb4a607393184 }

nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: m5.xlarge
    desiredCapacity: 10
    privateNetworking: true
  - name: ng-2-builders
    labels: { role: builders }
    instanceType: m5.2xlarge
    desiredCapacity: 2
    privateNetworking: true
    iam:
      withAddonPolicies:
        imageBuilder: true
```
The cluster name or nodegroup name must contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and can't exceed 128 characters; otherwise, you will receive a validation error. For more information, see Create a stack from the CloudFormation console in the AWS CloudFormation User Guide.
## Update kubeconfig for new cluster
After the cluster has been created, the appropriate Kubernetes configuration will be added to your kubeconfig file. That is, the file set in the `KUBECONFIG` environment variable, or `~/.kube/config` by default.
The path to the kubeconfig file can be overridden using the `--kubeconfig` flag.

Other flags that can change how the kubeconfig file is written:
| flag | type | use | default value |
|---|---|---|---|
| `--kubeconfig` | string | path to write kubeconfig (incompatible with `--auto-kubeconfig`) | `$KUBECONFIG` or `~/.kube/config` |
| `--set-kubeconfig-context` | bool | if true then current-context will be set in kubeconfig; if a context is already set then it will be overwritten | `true` |
| `--auto-kubeconfig` | bool | save kubeconfig file by cluster name | `true` |
| `--write-kubeconfig` | bool | toggle writing of kubeconfig | `true` |
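If you skipped writing the kubeconfig at creation time, or need to regenerate it on another machine, eksctl can write it for an existing cluster. A minimal sketch, assuming the `basic-cluster` example from above:

```sh
# Write or update the kubeconfig entry for an existing cluster.
eksctl utils write-kubeconfig --cluster=basic-cluster --region=eu-north-1
```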
## Delete cluster
To delete this cluster, run:
```sh
eksctl delete cluster -f cluster.yaml
```
**Warning**

Use the `--wait` flag with delete operations to ensure deletion errors are properly reported.

Without the `--wait` flag, eksctl will only issue the delete operation for the cluster's CloudFormation stack and won't wait for its deletion. In some cases, AWS resources using the cluster or its VPC may cause cluster deletion to fail. If your delete fails or you forget the `--wait` flag, you may have to go to the CloudFormation console and delete the EKS stacks from there.
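For example:

```sh
# Wait for stack deletion to complete so any errors are surfaced.
eksctl delete cluster -f cluster.yaml --wait
```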
**Warning**

PDB policies may block node removal during cluster deletion.

When deleting a cluster with nodegroups, Pod Disruption Budget (PDB) policies can prevent nodes from being removed successfully. For example, clusters with `aws-ebs-csi-driver` installed typically have two pods with a PDB policy that allows only one pod to be unavailable at a time, making the other pod unevictable during deletion. To successfully delete the cluster in these scenarios, use the `--disable-nodegroup-eviction` flag to bypass PDB policy checks:
```sh
eksctl delete cluster -f cluster.yaml --disable-nodegroup-eviction
```
See the examples/ directory for more sample config files.
## Dry Run
The dry-run feature skips cluster creation and instead outputs a ClusterConfig file that represents the supplied CLI options, including the default values set by eksctl.
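As a quick illustration (the cluster name here is a placeholder):

```sh
# Print the generated ClusterConfig instead of creating the cluster.
eksctl create cluster --name=my-cluster --dry-run
```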
More info can be found on the Dry Run page.