Install Rancher K3s on Raspberry Pi Cluster
ZachiNachshon
Posted on April 5, 2021
Credits: Logo by cncf-branding
Install a Rancher Labs Kubernetes distribution (k3s) on a Raspberry Pi cluster.
Note: This post refers to a laptop / desktop as a client machine. These are the clients used to connect remotely to the Raspberry Pi master / worker nodes.
Prerequisites
Master Server
What is a Kubernetes master node? A master node is a server that controls and manages a set of worker nodes. In our case, it is the Raspberry Pi that controls the rest of the Raspberry Pis in our cluster.
Install
- SSH into the Raspberry Pi server that is intended to operate as the Kubernetes master. It should be the one named `kmaster`, as instructed by this post:

  ```bash
  # Connect to the RPi server that operates as the k8s master
  ssh pi@kmaster
  ```
- Run the following command to install a plain version of `k3s`, without the `traefik` load balancer and the `k8s-dashboard`:

  ```bash
  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=" --no-deploy traefik --no-deploy kubernetes-dashboard" sh -
  ```
Note: We will install a plain version of `k3s`, without the Traefik load balancer and/or the Kubernetes dashboard. These should be covered by other dedicated blog posts.
- Verify that `k3s` was installed successfully. Run the following commands from within the RPi master server:

  ```bash
  # Check for status - active (running)
  sudo systemctl status k3s

  # Check for status - Ready
  sudo kubectl get nodes

  # Optional - check that there are no error logs
  tail -f /var/log/syslog
  ```
- (Optional): Run the following when you need to restart `k3s`:

  ```bash
  # Restart k3s
  sudo systemctl restart k3s
  ```
Note: The `k3s` service is automatically started during installation and is configured to restart on reboot or failure. The install script will install `k3s` and additional utilities, such as `kubectl`, `crictl`, `k3s-killall.sh`, and `k3s-uninstall.sh`.

Note: During installation, `kubectl` on the master server is aliased to the command `k3s kubectl`, so that we can use the pre-packaged version of `kubectl`. `k3s` uses a container runtime called `containerd` directly (no `docker`); interact with it using `crictl`.
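Since `docker` isn't part of the picture, `crictl` is the tool for inspecting what is actually running on the node. A couple of basic commands to get started (run with `sudo`, since the `containerd` socket used by k3s is owned by root):

```bash
# List the containers managed by the embedded containerd
sudo crictl ps

# List the images that have been pulled onto this node
sudo crictl images
```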
Uninstall
- SSH into the `k3s` master server:

  ```bash
  ssh pi@kmaster
  ```
- Uninstall `k3s` by executing the following script:

  ```bash
  /usr/local/bin/k3s-uninstall.sh
  ```
Note: The Rancher `k3s` cluster configuration (kubeconfig) can be found at `/etc/rancher/k3s/k3s.yaml`.

Note: The `containerd` container runtime configuration can be found at `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`.
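If you want a quick look at (or a backup of) these files before uninstalling, something along these lines should do, using the default paths mentioned above:

```bash
# Inspect the k3s kubeconfig generated on the master
sudo cat /etc/rancher/k3s/k3s.yaml

# Inspect the embedded containerd configuration
sudo cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml
```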
Worker Node
What is a Kubernetes worker node? These are Raspberry Pi servers that act as workload runtimes, i.e. they run our applications, jobs and whatever else we require them to run. They aren't the ones that manage the cluster, just the ones that "get the job done".
Join a Cluster
- Extract the `k3s` join cluster token:

  ```bash
  # SSH to the RPi master server
  ssh pi@kmaster

  # Extract the join token
  sudo cat /var/lib/rancher/k3s/server/node-token

  # Alternatively, you can run this one-liner directly from a client machine
  ssh pi@kmaster "sudo cat /var/lib/rancher/k3s/server/node-token"
  ```
- Find the `k3s` master server IP address that is assigned to `kmaster`, either from the server itself or from a client machine if you've followed this post:

  ```bash
  # From the RPi master server
  ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1

  # Alternatively, from a client machine
  cat /etc/hosts | grep kmaster | awk '{print $1}'
  ```
- SSH into a Raspberry Pi server intended to be used as a Kubernetes worker node. It should be the one named `knode<number>`, as instructed in this post:

  ```bash
  # Connect to a k3s worker node
  ssh pi@knode<number>
  ```
- Run the following command to install `k3s-agent` and join the worker node to the existing cluster:

  ```bash
  # Replace MASTER-IP-ADDRESS with the master server IP address from the previous step
  # Replace JOIN-TOKEN with the join token from the previous step
  curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER-IP-ADDRESS>:6443 \
      K3S_TOKEN=<JOIN-TOKEN> sh -
  ```
- Verify that `k3s-agent` was installed successfully. Run the following commands from within the RPi worker server:

  ```bash
  # Check for status - active (running)
  sudo systemctl status k3s-agent

  # Optional - check that there are no error logs
  tail -f /var/log/syslog
  ```
Repeat the above steps for every Raspberry Pi board intended to be used as a Kubernetes worker node. Once all workers have joined, you can sanity-check cluster membership from the master, as shown below.
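A minimal check from the RPi master server (or from a client machine once `kubectl` is configured, as described in the Utilities section). The label command is optional, and the node name `knode1` is just an example following this post's naming convention:

```bash
# All master and worker nodes should report a Ready status
sudo kubectl get nodes -o wide

# Optionally, label a worker so it shows a role in 'kubectl get nodes'
sudo kubectl label node knode1 node-role.kubernetes.io/worker=worker
```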
Uninstall
- SSH into the `k3s` worker node:

  ```bash
  ssh pi@knode<number>
  ```
- Uninstall `k3s-agent` by executing the following script:

  ```bash
  /usr/local/bin/k3s-agent-uninstall.sh
  ```
Utilities
These are common utilities that should be installed on client machines to interact with the `k3s` master server.
Why do I need to install them?
You'll want to interact with Kubernetes in order to deploy services, execute Helm charts and/or use utilities that grant you cluster visibility.
Note: If you are planning to interact with `k3s` in a CI environment, make sure that the agent image you are using in the pipeline includes utilities such as `kubectl`.
kubectl
Install `kubectl`, a command-line interface tool that allows you to run commands against a remote `k3s` cluster.
- On a client machine, create a new empty `k3s` config file:

  ```bash
  mkdir -p $HOME/.kube/k3s
  touch $HOME/.kube/k3s/config

  # Set limited user permissions
  chmod 600 $HOME/.kube/k3s/config
  ```
- Copy the `k3s` cluster configuration from the RPi master server:

  ```bash
  ssh pi@kmaster "sudo cat /etc/rancher/k3s/k3s.yaml" > $HOME/.kube/k3s/config
  ```
- Edit the `k3s` config file on the client machine and change the remote address of the `k3s` master from `localhost`/`127.0.0.1` to `kmaster` (a scripted alternative is shown after the note below):

  ```bash
  # Edit master config
  vim $HOME/.kube/k3s/config

  # Search for the 'server' attribute located in -
  #   clusters:
  #   - cluster:
  #       server: https://127.0.0.1:6443 or https://localhost:6443
  #
  # Change 'server' value to https://kmaster:6443
  ```
Note: Make sure `kmaster` is properly defined as a host name in `/etc/hosts`, otherwise use the RPi master server IP address.
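If you prefer not to edit the file by hand, a one-liner along these lines achieves the same thing. This is only a sketch assuming the copied config contains the default `https://127.0.0.1:6443` server value; adjust the pattern if yours uses `localhost` instead:

```bash
# Point the copied kubeconfig at the kmaster host instead of the loopback address
sed -i.bak 's|https://127.0.0.1:6443|https://kmaster:6443|' $HOME/.kube/k3s/config
```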
- Install `kubectl` as described in the official docs:

  ```bash
  # tl;dr - macOS only
  brew install kubectl

  # Verify client version
  kubectl version --client
  ```
- Export the `k3s` config file path as the `KUBECONFIG` environment variable; by doing so, the `kubectl` context is set to use the RPi `k3s` cluster:

  ```bash
  export KUBECONFIG=$HOME/.kube/k3s/config
  ```
Note: Add the export command to a `.bash_profile` / `.bashrc` file (see the snippet below). This way, every new shell session will have the `k3s` cluster config set as the `kubectl` active context. Optional: check this post to manage your dotfiles in style.
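For example, assuming a bash shell on the client machine (adjust the target file for zsh or other shells):

```bash
# Persist the KUBECONFIG export so every new shell session picks it up
echo 'export KUBECONFIG=$HOME/.kube/k3s/config' >> $HOME/.bashrc
```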
- Verify that `kubectl` was installed properly and can communicate with the RPi master server:

  ```bash
  kubectl get nodes

  # Expect the following response as success:
  #
  # NAME      STATUS   ROLES                  AGE   VERSION
  # knodeX    Ready    <none>                 10m   v1.20.4+k3s1
  # ...
  # knode1    Ready    <none>                 23m   v1.20.4+k3s1
  # kmaster   Ready    control-plane,master   52m   v1.20.4+k3s1
  ```
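Optionally, confirm that the cluster's system pods came up properly as well:

```bash
# All k3s system pods should be in a Running or Completed state
kubectl get pods --all-namespaces
```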
(Optional): read here for additional information about `kubectl`.
k9s
Install `k9s`, a terminal UI that interacts with the `k3s` cluster. It increases velocity by saving you from typing repetitive commands and/or the need to alias common ones, and allows easy navigation, observation and management, all in one package.
- Install as instructed on the official repository docs:

  ```bash
  # tl;dr for macOS
  brew install k9s
  ```
Note: Make sure `$KUBECONFIG` is properly defined and set to the `k3s` config path.
- If you are working on a single cluster and it is the `default` one, move on to the next step. Otherwise, if you are working with multiple clusters and/or using a cluster that isn't named `default`, change the `currentContext` and `currentCluster` attributes in the `k9s` config file to the proper cluster values.

Note: The `k9s` configuration file can be found at `$HOME/.k9s/config.yml`.
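A quick way to check which values `k9s` is currently pointing at, assuming the default config location mentioned above:

```bash
# Show the context/cluster attributes in the k9s config file
grep -E 'currentContext|currentCluster' $HOME/.k9s/config.yml
```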
- Run `k9s` in a fresh shell session and verify that you can connect to the `k3s` cluster successfully.

(Optional): read here for additional information about `k9s`.
Summary
Well done for successfully installing a Kubernetes cluster on top of your Raspberry Pi cluster! 👏
What now? Check back for future posts explaining how to install a load balancer, certificate manager and a private docker registry on that cluster.
Please leave your comment, suggestion or any other input you think is relevant to this post in the discussion below.
Like this post?
You can find more by:
Checking out my blog: https://blog.zachinachshon.com
Following me on twitter: @zachinachshon
Thanks for reading! ❤️