Upgrade an AWS Elastic Kubernetes Service (EKS) Cluster from 1.22 to 1.23 via Terraform

harshaway

Posted on May 22, 2023

Kubernetes is the new normal when it comes to hosting your applications.

Amazon Elastic Kubernetes Service (EKS) is a managed service where the control plane is deployed in a highly available configuration and is completely managed by AWS in the backend, leaving administrators/SREs/DevOps engineers to manage the data plane and the microservices running as pods.

As of writing this post, the Kubernetes community follows a cadence of three releases per year. AWS, on the other hand, ships its own build of Kubernetes (the EKS version) and follows its own release cadence. You can find this information at https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

Note - An EKS upgrade is a stepped upgrade and can only move one minor version at a time, e.g. 1.22 to 1.23.

Managing AWS EKS via Terraform helps us maintain the desired state and also allows us to perform the cluster upgrade seamlessly.

Pre-requisites in Terraform
Verify that the EKS Terraform state does not throw any errors before the upgrade.
Ensure the state is stored in a remote backend such as Amazon S3.
Pre-requisites in EKS
Ensure at least five free IP addresses are available in the VPC subnets of the EKS cluster (explained in the section below).
Ensure the kubelet version on the nodes is the same as the control plane version.
Verify the EKS add-on versions and upgrade them if necessary before starting the cluster upgrade.
Pod Disruption Budgets (PDBs) can sometimes cause errors while draining pods (consider relaxing them while upgrading).
Use a Kubernetes API deprecation finder tool such as Pluto to catch the API changes required by the newer version (a quick pre-flight check is sketched after this list).
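
For example, a quick pre-flight check could look like the following. This is only a sketch; the manifest directory and target version are placeholders for your own setup.

# confirm the kubelet version on every node matches the control plane
kubectl version --short
kubectl get nodes -o wide

# scan local manifests and in-cluster Helm releases for APIs removed in the target version
pluto detect-files -d ./manifests --target-versions k8s=v1.23.0
pluto detect-helm --target-versions k8s=v1.23.0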
Upgrade Process
https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html

Let me break down what happens when we perform the upgrade. The upgrade is sequential:

Control Plane upgrade
The control plane upgrade is an in-place upgrade: AWS launches a new control plane with the target version within the same subnets as the existing control plane, which is why we need at least five free IPs in the EKS subnets to accommodate it. The new control plane goes through readiness and health checks, and once they pass it replaces the old control plane. This process happens in the backend within the AWS infrastructure, and there is no impact to the applications.
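
You can verify the free IP count in the cluster subnets before triggering the upgrade; a small check along these lines (the subnet IDs are placeholders):

aws ec2 describe-subnets \
  --subnet-ids subnet-0abc1234 subnet-0def5678 \
  --query "Subnets[].{ID:SubnetId,FreeIPs:AvailableIpAddressCount}" \
  --output table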

Node upgrade
The node upgrade is also an in-place, rolling upgrade: new nodes with the target version are launched, and the pods on the old nodes are evicted and rescheduled onto the new nodes.

Add-ons upgrade
Add-ons on your cluster such as CoreDNS, the VPC CNI, and kube-proxy need to be upgraded accordingly, as per the matrix in https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-add-on-update.

Update System Components version (Kube-Proxy, CoreDNS, AWS CNI, Cluster Autoscaler)
Check the system component versions before upgrading. Refer to the page below for the desired versions of kube-proxy, CoreDNS, and the AWS VPC CNI.

Updating an Amazon EKS cluster Kubernetes version - Amazon EKS
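
To see the versions currently running in the cluster, you can inspect the images of the system components, for example:

kubectl describe daemonset kube-proxy -n kube-system | grep Image
kubectl describe daemonset aws-node -n kube-system | grep Image
kubectl describe deployment coredns -n kube-system | grep Image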

Let us take an example of upgrading from 1.22 to 1.23

Step-1:
Ensure the control plane and nodes are on the same version.
kubectl version --short
kubectl get nodes
Step-2:
Before updating your cluster, ensure that the proper pod security policies are in place to avoid potential security issues.
kubectl get psp eks.privileged
Step-3:
Update the EKS version in your Terraform variables file to the target version, say 1.23, and then run a Terraform plan and apply.

$ vi variables.tf

variable "eks_version" {
   default = "1.23"
   description = "kubernetes cluster version provided by AWS EKS"
}

terraform plan 
terraform apply --auto-approve
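
The eks_version variable is consumed by the cluster resource, so bumping it is what drives the control plane upgrade on the next apply. A minimal sketch, with illustrative resource and variable names rather than the actual module layout:

resource "aws_eks_cluster" "this" {
  name     = "my-cluster"
  version  = var.eks_version   # bumping this value triggers the in-place control plane upgrade
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}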

Step-4:
Once the control plane is upgraded, the managed worker node upgrade process is invoked automatically. If you are using self-managed worker nodes, choose the AMI matching your control plane version and region as described in https://docs.aws.amazon.com/eks/latest/userguide/retrieve-ami-id.html
Update your worker node Terraform file with the new AMI ID and run a Terraform plan and apply (a sketch of the SSM parameter lookup follows).
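
One way to keep the AMI in lockstep with the control plane version is to look it up from the public SSM parameter instead of hard-coding the ID. A sketch, assuming the EKS-optimized Amazon Linux 2 AMI and a launch-template based node group:

data "aws_ssm_parameter" "eks_ami" {
  # EKS-optimized Amazon Linux 2 AMI recommended for the target Kubernetes version
  name = "/aws/service/eks/optimized-ami/${var.eks_version}/amazon-linux-2/recommended/image_id"
}

resource "aws_launch_template" "workers" {
  name_prefix = "eks-workers-"
  image_id    = data.aws_ssm_parameter.eks_ami.value
  # instance type, user data, etc. as per your existing node group configuration
}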

Step-5:
Once the control plane and worker node upgrades are complete, it is time to upgrade the add-ons. Check which add-ons are enabled in your cluster and upgrade each of them via the console or eksctl, depending on how you manage them.
Each add-on has a compatibility matrix in the AWS documentation and has to be upgraded accordingly.
sample ref : https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-add-on-update
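
To see what is enabled and which add-on versions are available for the new cluster version, something like the following works (my-cluster is a placeholder for your cluster name):

# list the add-ons enabled on the cluster
aws eks list-addons --cluster-name my-cluster

# check which VPC CNI versions are compatible with Kubernetes 1.23
aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.23 \
  --query "addons[].addonVersions[].addonVersion"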

Step-6:
Starting with version 1.23, it is mandatory to install the AWS EBS CSI driver on the EKS cluster, so deploy it as part of the upgrade to 1.23 and keep it for later versions.

To deploy the Amazon EBS CSI driver, follow the steps below.

Creating the Amazon EBS CSI driver IAM role for service accounts - Amazon EKS

Using AWS Management Console:

To create your Amazon EBS CSI plugin IAM role with the AWS Management Console
Open the IAM console at https://console.aws.amazon.com/iam/.

In the left navigation pane, choose Roles.

On the Roles page, choose Create role.

On the Select trusted entity page, do the following:

In the Trusted entity type section, choose Web identity.

For Identity provider, choose the OpenID Connect provider URL for your cluster (as shown under Overview in Amazon EKS).

For Audience, choose sts.amazonaws.com.

Choose Next.

On the Add permissions page, do the following:

In the Filter policies box, enter AmazonEBSCSIDriverPolicy.

Select the check box to the left of the AmazonEBSCSIDriverPolicy returned in the search.

Choose Next.

On the Name, review, and create page, do the following:

For Role name, enter a unique name for your role, such as AmazonEKS_EBS_CSI_DriverRole.

Under Add tags (Optional), add metadata to the role by attaching tags as key–value pairs. For more information about using tags in IAM, see Tagging IAM Entities in the IAM User Guide.

Choose Create role.

After the role is created, choose the role in the console to open it for editing.

Choose the Trust relationships tab, and then choose Edit trust policy.

Find the line that looks similar to the following line:

"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"

Change it so the condition matches the driver's service account instead of the audience, keeping your cluster's OIDC provider ID and region, so that it looks like the following line, and then choose Update policy:

"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"

If you are using encryption on EBS volumes, a separate customer managed policy for the KMS key needs to be created and attached to the same role. In the IAM console, choose Policies, then Create policy, and select the JSON tab.

Copy and paste the following code into the editor, replacing custom-key-arn with the custom KMS key ARN.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": ["custom-key-arn"],
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": ["custom-key-arn"]
    }
  ]
}

Choose Next: Tags.

On the Add tags (Optional) page, choose Next: Review.

For Name, enter a unique name for your policy (for example, KMS_Key_For_Encryption_On_EBS_Policy).

Choose Create policy.

In the left navigation pane, choose Roles.

Choose the AmazonEKS_EBS_CSI_DriverRole in the console to open it for editing.

From the Add permissions drop-down list, choose Attach policies.

In the Filter policies box, enter KMS_Key_For_Encryption_On_EBS_Policy.

Select the check box to the left of the KMS_Key_For_Encryption_On_EBS_Policy that was returned in the search.

Choose Attach policies.
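
If you prefer eksctl over the console, the same IAM role can be created in a single command. A sketch, assuming the cluster is named my-cluster and already has an IAM OIDC provider associated:

eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole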

Managing the Amazon EBS CSI driver as an Amazon EKS add-on
Prerequisite: an existing cluster. To see the required platform version for the add-on, run the following command.

aws eks describe-addon-versions --addon-name aws-ebs-csi-driver
To add the Amazon EBS CSI add-on using eksctl

eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole --force

Updating the Amazon EBS CSI driver as an Amazon EKS add-on
Amazon EKS doesn't automatically update Amazon EBS CSI for your cluster when new versions are released or after you update your cluster to a new Kubernetes minor version. To update Amazon EBS CSI on an existing cluster, you must initiate the update and then Amazon EKS updates the add-on for you.

To update the Amazon EBS CSI add-on using eksctl
Check the current version of your Amazon EBS CSI add-on. Replace my-cluster with your cluster name.

eksctl get addon --name aws-ebs-csi-driver --cluster my-cluster

The example output is as follows.

NAME VERSION STATUS ISSUES IAMROLE UPDATE AVAILABLE

aws-ebs-csi-driver      v1.11.2-eksbuild.1      ACTIVE  0               v1.11.4-eksbuild.1

Update the add-on to the version returned under UPDATE AVAILABLE in the output of the previous step.

eksctl update addon --name aws-ebs-csi-driver --version v1.11.4-eksbuild.1 --cluster my-cluster --force

With the above procedure, the EBS CSI driver installation is complete and the EKS cluster upgrade from 1.22 to 1.23 is done.
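
Finally, it is worth confirming that the cluster reports the new version and that the driver pods are healthy. For example (the label selector assumes the add-on's default labels):

# the server version and node versions should now report v1.23.x
kubectl version --short
kubectl get nodes

# EBS CSI controller and node pods should be Running in kube-system
kubectl get pods -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver"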
