Building a Kubernetes v1.28 Cluster using kubeadm


Unni P

Posted on August 23, 2023


In this article, we will look at how to set up a three-node Kubernetes v1.28 cluster using kubeadm

Introduction

  • kubeadm is a tool used to create Kubernetes clusters

  • It automates the creation of Kubernetes clusters by bootstrapping the control plane, joining the nodes, and so on

  • It follows the Kubernetes release cycle

  • It is an open-source tool maintained by the Kubernetes community

Prerequisites

  • Create three Ubuntu 20.04 LTS instances for the control plane, node-1 and node-2

  • Each instance must have a minimum of 2 CPUs and 2 GB of RAM

  • Networking must be enabled between instances

  • Required ports must be allowed between instances (see the port reference after this list)

  • Swap must be disabled on instances
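
For reference, these are the ports kubeadm expects to be reachable between the instances, as listed in the kubeadm install documentation; the netcat check below is just one quick way to test reachability once the API server is up, assuming netcat is available on the instances and the /etc/hosts entries added later in this article are in place

# control-plane: 6443 (API server), 2379-2380 (etcd), 10250 (kubelet),
#                10257 (kube-controller-manager), 10259 (kube-scheduler)
# node-1 and node-2: 10250 (kubelet), 30000-32767 (NodePort services)

# example reachability check from node-1 or node-2 once the control plane is running
nc -zv control-plane 6443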

Initial Configuration

Set up unique hostnames on the control-plane, node-1 and node-2

Once the hostnames are set, log out from the current session and log back in for the changes to take effect

# control-plane

sudo hostnamectl set-hostname control-plane
# node-1

sudo hostnamectl set-hostname node-1
# node-2

sudo hostnamectl set-hostname node-2
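
You can confirm the new hostname on each instance after logging back in

# control-plane, node-1 and node-2

hostnamectl status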

Update the hosts file on the control-plane, node-1 and node-2 to enable communication via hostnames

# control-plane, node-1 and node-2

sudo vi /etc/hosts

172.31.91.254 control-plane
172.31.94.177 node-1
172.31.87.11 node-2
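
Once the hosts entries are in place, a quick ping by hostname confirms the instances can reach each other (the IP addresses above are from this example and will differ in your environment)

# control-plane

ping -c 2 node-1

ping -c 2 node-2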

Disable swap on control-plane, node-1 and node-2, and if a swap entry is present in /etc/fstab, comment out that line

# control-plane, node-1 and node-2

sudo swapoff -a

sudo vi /etc/fstab
  # comment out swap entry
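
To confirm swap is fully disabled, check that swapon reports nothing and that free shows 0B of swap

# control-plane, node-1 and node-2

sudo swapon --show

free -h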

To set containerd as our container runtime on control-plane, node-1 and node-2, we first need to load some kernel modules and modify kernel settings

# control-plane, node-1 and node-2

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay

sudo modprobe br_netfilter
# control-plane, node-1 and node-2

cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
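
You can verify that the modules are loaded and the new sysctl values are applied

# control-plane, node-1 and node-2

lsmod | grep -E 'overlay|br_netfilter'

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables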

Installation

Once the kernel modules are loaded and the system settings are modified, we can install the containerd runtime on control-plane, node-1 and node-2

# control-plane, node-1 and node-2

sudo apt update

sudo apt install -y containerd

Once the packages are installed, generate a default configuration file for containerd on control-plane, node-1 and node-2 and restart the containerd service

# control-plane, node-1 and node-2

sudo mkdir -p /etc/containerd

sudo containerd config default | sudo tee /etc/containerd/config.toml

sudo systemctl restart containerd
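
Confirm that the containerd service is active after the restart

# control-plane, node-1 and node-2

sudo systemctl status containerd --no-pager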

We need to install some prerequisite packages on control-plane, node-1 and node-2 for configuring the Kubernetes package repository

# control-plane, node-1 and node-2

sudo apt update

sudo apt install -y apt-transport-https ca-certificates curl

Download the public signing key for the Kubernetes package repositories and configure the Kubernetes apt repository on control-plane, node-1 and node-2

# control-plane, node-1 and node-2

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install the kubeadm, kubelet and kubectl tools and hold their package versions on control-plane, node-1 and node-2

# control-plane, node-1 and node-2

sudo apt update

sudo apt install -y kubeadm kubelet kubectl

sudo apt-mark hold kubeadm kubelet kubectl
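
Confirm the installed versions on each instance

# control-plane, node-1 and node-2

kubeadm version -o short

kubelet --version

kubectl version --client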

Initialize the cluster by executing the below command on the control-plane; the --pod-network-cidr value of 192.168.0.0/16 matches the default pod network CIDR used by the Calico addon installed later

# control-plane

sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.28.0

Once the initialization is completed, set up our access to the cluster on the control-plane

# control-plane

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
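
With the kubeconfig in place, kubectl should now be able to reach the API server

# control-plane

kubectl cluster-info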

Verify the cluster status by listing the nodes

The control-plane node is in a NotReady state because we haven't set up pod networking yet

# control-plane

kubectl get nodes
NAME            STATUS     ROLES           AGE   VERSION
control-plane   NotReady   control-plane   40s   v1.28.0

Install the Calico network addon on the cluster and verify the status of the nodes

# control-plane

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
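
The Calico pods take a short while to start; you can watch the calico-node and calico-kube-controllers pods come up in the kube-system namespace before checking the nodes again

# control-plane

kubectl get pods -n kube-system --watch
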
# control-plane

kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
control-plane   Ready    control-plane   101s   v1.28.0

Once networking is enabled, join the worker nodes to the cluster

Get the join command from the control-plane

# control-plane

kubeadm token create --print-join-command

Once the join command is retrieved from the control-plane, execute it on node-1 and node-2

# node-1 and node-2

sudo kubeadm join 172.31.91.254:6443 --token o3in76.aeqii9shr86cem2w --discovery-token-ca-cert-hash sha256:e301651b8930363842b054bafec26aba718dbc724d903c4c73228703622dc5f1

Verify the cluster again; all the nodes should now be in a Ready state

# control-plane

kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
control-plane   Ready    control-plane   2m55s   v1.28.0
node-1          Ready    <none>          28s     v1.28.0
node-2          Ready    <none>          19s     v1.28.0
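
The worker nodes show no role by default; optionally, you can label them so the ROLES column displays a worker role, which is purely cosmetic

# control-plane

kubectl label node node-1 node-role.kubernetes.io/worker=

kubectl label node node-2 node-role.kubernetes.io/worker=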

Application Deployment

Deploy an Nginx pod, expose it as a ClusterIP service and verify its status

# control-plane

kubectl run nginx --image=nginx --port=80 --expose
service/nginx created
pod/nginx created

kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          17s   192.168.247.1   node-2   <none>           <none>

kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.97.157.101   <none>        80/TCP    38s
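
Since the service is of type ClusterIP, it is only reachable from inside the cluster; one quick way to test it is a temporary pod running curl (the curlimages/curl image used here is just one convenient choice)

# control-plane

kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl -s http://nginx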

That's all for now

Reference

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
