Creating Kubernetes Cluster With CRI-O
Oshi Gupta
Posted on July 30, 2023
Container Runtime Interface (CRI) is one of the important parts of a Kubernetes cluster. It is a plugin interface that allows the kubelet to use different container runtimes. Recently, the CRI-O container runtime was announced as a CNCF Graduated project, so I thought of writing a blog on CRI-O and how to set up a single-node Kubernetes cluster with Kubeadm and CRI-O.
What is CRI-O?
CRI-O is a lightweight container runtime for Kubernetes. It is an implementation of Kubernetes CRI to use Open Container Initiative (OCI) compatible runtimes for running pods. It supports runc and Kata Containers as the container runtimes, but any OCI-compatible runtime can be integrated.
It is an open-source, community-driven project that supports OCI-based container registries.
It is maintained by contributors working at Red Hat, Intel, and other companies. It also ships with a monitoring program known as conmon. Conmon is an OCI container runtime monitor that handles the communication between CRI-O and runc for a single container.
The figure below shows how CRI-O works with the Kubernetes cluster to create containers in a pod.
Read more about the architecture of CRI-O here. The networking of the pod is set up through CNI, and CRI-O can be used with any CNI plugin.
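If you are curious to see conmon in action later, once CRI-O is running on a node (as set up in the sections below), you can list its processes; there should be one conmon process per running container. This is just an optional sanity check, not part of the setup:
# One conmon process is spawned per container managed by CRI-O
ps -C conmon -o pid,ppid,args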
Now, let’s see how to set up a Kubernetes cluster with Kubeadm and CRI-O as the container runtime.
Kubernetes Cluster With Kubeadm and CRI-O
In this section, we will see how to set up a single-node Kubernetes cluster with Kubeadm and CRI-O as the container runtime. For this, I have used an Ubuntu 22.04 VM with 2 CPUs and 2 GB of memory (the minimum requirement for Kubeadm). At the end, I have attached a video showing the installation process.
Install Kubeadm, Kubelet, and Kubectl
- First, disable swap so that the kubelet works properly.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
swapoff -a
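- Optionally, confirm that swap is really off: swapon should print nothing and free should report 0 for swap.
swapon --show
free -h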
- Install the Kubeadm, Kubelet, and Kubectl CLI tools. For this, update the apt package index and install the packages needed to use the Kubernetes apt repository.
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
- To install a specific Kubernetes version, specify the version as shown below.
apt-get update
apt-get install -y kubelet=1.26.3-00 kubeadm=1.26.3-00 kubectl=1.26.3-00
Here, I will be setting up a Kubernetes cluster with version 1.26.3.
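- If you want to pick a different patch release, you can optionally list the versions available in the apt repository first.
apt-cache madison kubeadm
apt-cache madison kubelet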
- Check the version of the CLI tools.
kubeadm version
kubectl version
kubelet --version
- Put a hold on these three packages so that they do not get updated when we update the system.
apt-mark hold kubelet kubeadm kubectl
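- Optionally, confirm that the hold is in place.
apt-mark showhold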
Install CRI-O
Complete the prerequisites for installing any container runtime.
- Enable the overlay and br_netfilter kernel modules and let iptables see bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
- Verify the modules are loaded with the following commands.
lsmod | grep br_netfilter
lsmod | grep overlay
- Check that the variables mentioned below are set to 1 so that iptables can see bridged traffic.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
- Install CRI-O by setting the OS and VERSION variables. Set OS according to your system and VERSION according to the Kubernetes version you wish to set up; it should be the same as the Kubeadm/Kubelet version.
OS=xUbuntu_22.04
VERSION=1.26
echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
mkdir -p /usr/share/keyrings
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg
apt-get update
apt-get install cri-o cri-o-runc cri-tools -y
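- Optionally, double-check that the installed CRI-O version matches the Kubernetes minor version you are targeting.
crio --version
apt-cache policy cri-o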
- Start and enable the CRI-O service and check its status.
sudo systemctl start crio.service
sudo systemctl enable crio.service
sudo systemctl status crio.service
- You can also view the runtime info with the following command.
crictl info
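A few other crictl commands are handy for talking to the runtime directly; these are optional and assume crictl picks up CRI-O's default socket (unix:///var/run/crio/crio.sock).
crictl version
# No pods or containers exist yet at this point, so these lists will be empty
crictl ps -a
crictl images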
Set Up the Cluster With Kubeadm
- Pull the images for Kubernetes version 1.26.3.
kubeadm config images pull --kubernetes-version v1.26.3
kubeadm config images list
- Create the cluster control-plane node.
kubeadm init --kubernetes-version v1.26.3
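kubeadm normally auto-detects the CRI-O socket, so the command above is enough. If you prefer to be explicit, a rough, optional alternative is to pass kubeadm a config file that pins the CRI socket and the Kubernetes version; the file name kubeadm-config.yaml below is just illustrative.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  # Default socket path used by CRI-O
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.26.3
EOF
kubeadm init --config kubeadm-config.yaml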
- Create the config file in the ~/.kube directory to access the Kubernetes cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Remove the taint from the control-plane node so that pods can be scheduled on it (needed for a single-node cluster).
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
- Check the cluster nodes and verify the container runtime is CRI-O.
kubectl get nodes -o wide
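- Optionally, you can print just the runtime reported by the node; it should show cri-o along with its version.
kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'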
With this, we have completed the process of creating a single-node cluster. Now let's install a CNI plugin, create a pod, and expose it via a service. We will also verify that the pod is running with the CRI-O container runtime.
Install CNI
- Install Helm and use it to deploy Cilium as the CNI plugin.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.13.4 --namespace kube-system
kubectl get pods -n kube-system
Wait until the Cilium pods are in the Running state.
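Instead of polling manually, you can also block until the Cilium agent pods report Ready; this assumes the default k8s-app=cilium label that the Cilium chart applies to its agent pods.
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --timeout=300s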
- Create a pod with nginx as its image.
kubectl run nginx --image=nginx
kubectl get pods
- Verify CRI-O as container runtime is used in pod creation.
kubectl describe pod nginx | grep -i container
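Another quick way to check this is to print the pod's container ID, which is prefixed with the runtime that created it (it should start with cri-o://).
kubectl get pod nginx -o jsonpath='{.status.containerStatuses[0].containerID}'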
- Expose the pod with the NodePort service.
kubectl expose pod nginx --type=NodePort --port=80
kubectl get svc
- Access the application.
curl http://<NODE_IP>:<NODE_PORT>
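If you prefer not to fill in the placeholders by hand, one way to look them up with kubectl is sketched below; it assumes a single node and that the node's InternalIP is reachable from where you run curl.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT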
Yay!! A single-node Kubernetes cluster of version 1.26.3 is ready with CRI-O as the container runtime.
Video
[Embedded video walkthrough of the installation process.]
Try out Hands-on
You can try the hands-on lab for this blog here at CloudYuga.
Connect With Me!!
- Twitter : oshi1136
- LinkedIn : Oshi Gupta