K8s cluster with OCI free-tier and Raspberry Pi4 (part 3)

Ștefănescu Liviu

Posted on February 15, 2023


This long read is a multi-part tutorial for building a Kubernetes cluster (using k3s) with 4 x OCI free-tier ARM instances and 4 x Raspberry Pi 4, plus the tools needed along the way (Terraform and Ansible) and a lot of things installed on the cluster.
Part 3 links the RPi4 machines to the k3s cluster on OCI.
The GitHub repository is here

Preparing

At this point I added the OCI machines to the C:\Windows\System32\drivers\etc\hosts file (WSL reads this file and mirrors it into its own /etc/hosts). Now my hosts file looks like this:

...
192.168.0.201   rpi4-1
192.168.0.202   rpi4-2
192.168.0.203   rpi4-3
192.168.0.204   rpi4-4
140.111.111.213 oci1
140.112.112.35  oci2
152.113.113.23  oci3
140.114.114.22  oci4

And I added them to the Ansible inventory too (/etc/ansible/hosts). Now that file looks like this:

[big]
rpi4-1  ansible_connection=ssh
[small]
rpi4-2  ansible_connection=ssh
rpi4-3  ansible_connection=ssh
rpi4-4  ansible_connection=ssh
[home:children]
big
small
[ocis]
oci1    ansible_connection=ssh ansible_user=ubuntu
[ociw]
oci2   ansible_connection=ssh ansible_user=ubuntu
oci3   ansible_connection=ssh ansible_user=ubuntu
oci4   ansible_connection=ssh ansible_user=ubuntu
[oci:children]
ocis
ociw
[workers:children]
big
small
ociw

It is not the best naming convention, but it works. Ansible reserves the name all, so if I want to interact with every host I can always use ansible all -m <module>. Test it using ansible all -a "uname -a": you should receive 8 responses, one for each Linux install. Now you can even re-run the update playbook created last part, to update the OCI instances too.
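The group names work the same way as all, so you can target any slice of the cluster; a couple of quick checks against the groups defined above:

# Inventory groups from /etc/ansible/hosts can be targeted directly
ansible oci -m ping            # just the 4 OCI instances
ansible workers -a "uptime"    # the 7 machines that will become agents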

K3s can work in multiple ways (here), but for this tutorial we picked the High Availability with Embedded DB architecture. This one runs etcd instead of the default SQLite, so it's important to have an odd number of server nodes (from the official documentation: "An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1."). For example, with 3 servers quorum is 2, so the cluster survives one server failure; with 2 servers quorum is also 2, so losing either one stalls etcd, and the even count buys no extra fault tolerance.
Initially this cluster was planned with 3 server nodes, 2 from OCI and 1 from RPi4. But after reading issues 1 and 2 on GitHub, there are known problems with etcd when server nodes sit on different networks. So this cluster will have 1 server node (this is how k3s names its master nodes) from OCI, and 7 agent nodes (this is how k3s names its worker nodes): 3 from OCI and 4 from RPi4.
First we need to open some ports so the OCI cluster can communicate with the RPi cluster. Go to VCN > Security List and click Add Ingress Rule. While I could have opened only the ports k3s needs for networking (listed here), I decided to open all protocols toward my home public IP only, as there is little risk in that. So under IP Protocol select All Protocols. Now you can test whether it worked by ssh-ing to any RPi4 and trying to ping an OCI machine, ssh to it, or reach another port.
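A quick way to run that test from one of the Pis (hostnames resolve through the hosts file above; nc is netcat and may need installing first):

# Run from any RPi4 after adding the ingress rule
ping -c 3 oci1               # basic reachability
ssh ubuntu@oci1 'echo ok'    # port 22 still works
nc -zv oci1 6443             # the k3s API port we will need later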

Netmaker

Now to link all of them together.
We will create a VPN between all of them (plus, if you want, your local machine and your VPS) using WireGuard. While WireGuard is not the hardest app to install and configure, there's a wonderful app that does almost everything by itself - Netmaker.
On your VPS, or on your local machine (if it has a static IP), run sudo wget -qO /root/nm-quick-interactive.sh https://raw.githubusercontent.com/gravitl/netmaker/master/scripts/nm-quick-interactive.sh && sudo chmod +x /root/nm-quick-interactive.sh && sudo /root/nm-quick-interactive.sh and follow all the steps. Select Community Edition (for a maximum of 50 nodes) and pick auto for everything else.
Now you will have a dashboard at an auto-generated domain. Open the link you received at the end of the installation in a browser and create a user and password.
It should have created a network for you. Open the Networks tab, then open the newly created network. If you're happy with it, great. I changed the CIDR to something fancier, 10.20.30.0/24, and activated UDP Hole Punching for better connectivity behind NAT. Now go to the Access Keys tab, select your network, and there you'll have all the keys needed to connect.
Netclient, the client that runs on every machine, needs WireGuard and systemd installed. Create a new Ansible playbook wireguard_install.yml and paste this:

---
- hosts: all
  tasks:
    - name: Install WireGuard
      apt:
        name: wireguard
        state: present
        update_cache: yes
...


Now run ansible-playbook wireguard_install.yml -K -b. To check that everything is OK so far, run ansible all -a "wg --version" and then ansible all -a "systemd --version".
Create a new file netclient_install.yml and add this:

---
- hosts: all
  tasks:
    - name: Add the Netmaker GPG key
      shell: curl -sL 'https://apt.netmaker.org/gpg.key' | tee /etc/apt/trusted.gpg.d/netclient.asc

    - name: Add the Netmaker repository
      shell: curl -sL 'https://apt.netmaker.org/debian.deb.txt' | tee /etc/apt/sources.list.d/netclient.list

    - name: Update the package list
      apt:
        update_cache: yes

    - name: Install netclient
      apt:
        name: netclient
        state: present
...


Now run it as usual: ansible-playbook netclient_install.yml -K -b. This will install netclient on all hosts. To check, run ansible all -a "netclient --version".
The last step is easy. Just run ansible all -a "netclient join -t YOURTOKEN" -b -K, where YOURTOKEN is the token from your Join Command in Netmaker Dashboard > Access Keys. Now all hosts will share a network. This is mine, 11 machines (4 RPi4, 4 OCI instances, my VPS, my WSL and my Windows machine; the last 3 are not needed).

[Image: Netmaker dashboard showing the network's connected nodes]
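To double-check the mesh from the command line, you can query the WireGuard interface on every host (it is named after the network, nm-netmaker in my case):

# Each host should list the other nodes as peers, with recent handshakes
ansible all -b -K -a "wg show nm-netmaker"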

SSH to the OCI server and run: first sudo systemctl stop k3s, then sudo rm -rf /var/lib/rancher/k3s/server/db/etcd (to wipe the old etcd state), and then reinstall, this time with curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-iface=nm-netmaker" INSTALL_K3S_CHANNEL=latest sh -.
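The same sequence as one copy-paste block:

# On the OCI server node: stop k3s, wipe the stale etcd state,
# then reinstall with flannel bound to the Netmaker interface
sudo systemctl stop k3s
sudo rm -rf /var/lib/rancher/k3s/server/db/etcd
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-iface=nm-netmaker" INSTALL_K3S_CHANNEL=latest sh -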
For the agents, we will make an Ansible playbook workers_link.yml with the following content:

---
- hosts: workers
  tasks:
    - name: Install k3s on workers and link to server node
      shell: curl -sfL https://get.k3s.io | K3S_URL=https://10.20.30.1:6443 K3S_TOKEN=MYTOKEN INSTALL_K3S_EXEC="--flannel-iface=nm-netmaker" INSTALL_K3S_CHANNEL=latest sh -
...

You have to paste the content of the file on the server, sudo cat /var/lib/rancher/k3s/server/node-token, as MYTOKEN, and change the server's IP address if yours is different. Now run it with ansible-playbook ~/ansible/link/workers_link.yml -K -b.
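If you don't want to log in to the server just for the token, you can read it over SSH from your workstation (assuming the oci1 alias from the hosts file above):

# Print the k3s join token from the server node
ssh ubuntu@oci1 'sudo cat /var/lib/rancher/k3s/server/node-token'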
Finally done. Go back to the server node, run sudo kubectl get nodes -o wide and you should see 8 results there: 1 server node and 7 agent nodes.

References

Netmaker, from here, and its documentation, here.
