How to create a Kubernetes cluster on Alpine Linux
Dave
Posted on May 19, 2020
This post will help you understand kubeadm, kubelet flags, and the nuances of running Kubernetes on Alpine Linux.
Creating a production-ready K8s cluster is almost a breeze nowadays on most cloud platforms, so I was curious to see how hard it would be to create a cluster from scratch on my own set of VMs. It turns out: not very hard.
To accomplish this, you can either do it the hard way or use some automation. There are two main options:
- kubespray, which uses Ansible under the hood
- kubeadm, which is the official way to do it, lives in the main Kubernetes repo (k/k), and is supported by the amazing k8s team at VMware
Since the kubeadm binary already comes with the kubernetes package on Alpine, I decided to go with that option.
I already had KVM installed on my machine and an Alpine Linux 3.9 VM ready to go; if you haven't provisioned your VMs yet, pause here and do that before you proceed.
Once you have your VM, add the community and testing repositories so you can get the binaries for the Kubernetes and Docker packages:
# echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing/" >> /etc/apk/repositories
# echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community/" >> /etc/apk/repositories
Then install the required packages:
# apk add kubernetes@testing
# apk add docker@community
# apk add cni-plugins@testing
At this point, when I tried to start the Docker service, I'd get an error:
# service docker start
supervise-daemon: --pidfile must be specified
failed to start Docker Daemon
ERROR: docker failed to start
This is apparently a bug on the part of supervise-daemon, and I created a merge request for it on alpine/aports, but the issue has since been solved in newer versions of Alpine. In case you still run into it, edit your /etc/init.d/docker file, add pidfile="/run/docker/docker.pid", and inside the start_pre block add mkdir -p /run/docker.
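For reference, the relevant parts of /etc/init.d/docker end up looking roughly like this (a sketch; keep whatever else is already in the script and in start_pre):
pidfile="/run/docker/docker.pid"
start_pre() {
    mkdir -p /run/docker
}
After that, service docker start should bring the daemon up cleanly.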
Now you can duplicate your VM in KVM and name the new one worker-1:
# hostname worker-1
# echo "worker-1" > /etc/hostname
Make sure to do the same steps for the master node, but with the name master-1.
You're ready to create your control plane. On the master node, run:
# kubeadm init --apiserver-advertise-address=[ Master Node's IP Here ] --kubernetes-version=1.17.5
Kubeadm runs in phases, and mine was crashing when it reached:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
Opening another terminal (over SSH) and restarting the kubelet service fixed this. It turns out kubeadm starts the kubelet service first and only then writes the config files kubelet needs to start properly. On other OSes such as Ubuntu, systemd, the init system, keeps restarting the crashing service until the config files are there and kubelet can run.
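Concretely, while kubeadm sits in the wait-control-plane phase, this from a second SSH session was enough to get it moving (kubelet being the OpenRC service name that ships with the Alpine package):
# service kubelet restart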
Alpine, on the other hand, uses OpenRC as its init system, which doesn't restart services stuck in crash loops. For that, the Gentoo community has introduced supervise-daemon, which is still experimental at the moment. To make this work on Alpine, we fixed the issue directly in kubeadm with this PR.
Once kubeadm runs its course, it gives you two notes. The first is the location of your kube config file; this is the file kubectl uses to authenticate to the API server on every call. You need to copy this file to any machine that needs to interact with the cluster using kubectl.
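If you'll be running kubectl as root on the master itself, copying kubeadm's admin config into place looks like this (/etc/kubernetes/admin.conf is kubeadm's default location for it):
master-1 # mkdir -p $HOME/.kube
master-1 # cp -i /etc/kubernetes/admin.conf $HOME/.kube/config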
The second is a join command like the one below, which is how you'll add your worker nodes to the cluster. First apply your CNI on the master node, then join from the worker node:
# on master node
master-1 # kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# on worker node
worker-1 # kubeadm join 192.168.122.139:6443 --token hcexp0.qiaxub64z17up9rn --discovery-token-ca-cert-hash sha256:05653259a076769faa952024249faa9c9457b4abf265914ba58f002f08834006
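Once the join succeeds, you can confirm from the master node that the worker registered; it may show NotReady for a minute while the CNI pods start:
master-1 # kubectl get nodes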
Note: your join command should succeed now, but when I initially tried it, the kubelet service would again fail to start because config files were missing, and, surprisingly, restarting the kubelet service didn't help this time. (Shocking, I know!)
After some investigation I realized there was another mismatch between systemd and OpenRC: --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf was missing from /etc/conf.d/kubelet, and adding it fixed the startup. But that still didn't specify a CNI, and my pods would get Docker IPs. You guessed it: another kubelet argument was missing. (See the full changes that were necessary here.)
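As a sketch, the kubelet arguments in /etc/conf.d/kubelet ended up along these lines. The exact variable name depends on the Alpine init script, the kubeconfig paths are kubeadm's defaults, and --network-plugin=cni is my assumption for the missing CNI flag:
command_args="$command_args \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --network-plugin=cni"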
At this point you can deploy your workloads. If you come from the Ubuntu world, one subtle difference is that you need to make sure your apps are compatible with musl as opposed to glibc. For example, if you're deploying Go binaries, make sure you're compiling with CGO_ENABLED=0 to create a statically-linked binary, and if you're deploying Node apps, make sure your npm install is run inside an Alpine container.
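For example, a musl-safe Go build looks something like this (myapp is a placeholder name):
$ CGO_ENABLED=0 go build -o myapp .
The resulting binary is statically linked, so it behaves the same on Alpine's musl as it does on glibc-based distros.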
That's it! Feel free to reach out to me if you need help with your k8s clusters.