Homelab: Cluster Architecture

mikeyGlitz

Posted on June 14, 2021

To say that 2020 was a rough year for a lot of people would be an understatement. I was fortunate enough to come across a video and some mentorship that motivated me to try to come out of lockdown better than I went in.

I decided that it was time to pick up a new skill. At the time, there was a goal at work to transition our infrastructure from AWS Elastic Container Service (ECS) to an internally managed Kubernetes-based platform. Having only dabbled in Kubernetes, I wanted to know more, so I set out to build a home lab using Kubernetes (K3s). It took me about six months to reach a point where I was comfortable with what I had built.

Deciding What to Build

I started the process by thinking about what I wanted to use the home lab for. I was originally inspired by a blog post I had read about setting up a Raspberry Pi home lab with DNS, OpenLDAP, and NextCloud. Similarly, I knew I wanted a centralized system for identity management and a place to store files. As a developer who sometimes works on personal projects, I also wanted a local network playground for any personal services I build.

I spent about two months poring over Helm charts and experimenting with the Kubernetes cluster bundled with Docker Desktop on my Windows laptop to get as close to an ideal configuration as I could. I eventually ran out of resources and built a three-node cluster using embedded devices.

I ultimately came up with the following design:

Kubernetes Cluster Architecture

Architectural Explanation

The architecture uses K3s as a lightweight Kubernetes distribution. Because I chose to build on low-cost embedded devices, there was an inherent limit on the resources available, so a distribution that uses resources efficiently was paramount. K3s also factored into the decision by providing the following features out of the box:

  • Proxying services externally with ServiceLB (see the sketch after this list)
  • A one-click installer using k3sup
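
As a minimal sketch of the ServiceLB point above: on K3s, a plain LoadBalancer Service is picked up by the built-in ServiceLB (klipper-lb) and exposed on the node's IP. The service name, selector, and ports here are placeholders, not my actual configuration.

```yaml
# Hypothetical Service exposed externally by K3s' built-in ServiceLB.
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer   # ServiceLB binds this to the node's IP
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
```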

File Storage

nfs-client-provisioner implements portable network storage across the Kubernetes cluster using the Network File System (NFS) protocol. NFS allows folders to be mounted over the network as file shares. nfs-client-provisioner takes things a step further by auto-provisioning NFS-backed volumes whenever a Kubernetes Persistent Volume Claim (PVC) is declared with the nfs storage class.
The idea here is that I wanted the containers running in the Kubernetes pods to write directly to a USB drive mounted on the Kubernetes master node. nfs-client-provisioner creates a sub-folder in the NFS root named after the PVC; when a PVC is deleted, its folder is renamed archived-<pvc-name>.
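
As a sketch, a claim like the following is all it takes for nfs-client-provisioner to carve out a matching sub-folder on the NFS share; the claim name and size are just examples.

```yaml
# Hypothetical claim: nfs-client-provisioner creates a sub-folder on the NFS
# export named after this PVC and binds a volume to it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  storageClassName: nfs       # the storage class served by nfs-client-provisioner
  accessModes:
    - ReadWriteMany           # NFS shares can be mounted by pods on any node
  resources:
    requests:
      storage: 5Gi
```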

Networking

By default K3s ships with Traefik as the ingress controller for the Kubernetes cluster. Having used Traefik before, I found that setting up a traffic intercept with a project like oauth2-proxy was not straightforward. I chose NGINX instead for its widespread support, its ease of configuration through annotations, its ability to proxy TCP and UDP services, and its support for an authentication proxy configured with annotations on Ingress objects.
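
For example, ingress-nginx exposes raw TCP services through a ConfigMap that maps an external port to a namespace/service:port pair; the controller has to be pointed at it with its --tcp-services-configmap flag, and the port still needs to be opened on the controller's own Service. The MQTT service below is purely a placeholder.

```yaml
# Sketch of ingress-nginx TCP exposure: key = external port,
# value = <namespace>/<service>:<port>.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1883": "default/mosquitto:1883"
```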

Oauth2-proxy is a project that provides an authentication and authorization intercept in front of services with pages you want to protect. It delegates authentication using the OAuth2 protocol and can be configured for identity-management services such as Keycloak.
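
Roughly, protecting an application then comes down to two NGINX auth annotations on its Ingress that point at the oauth2-proxy endpoints. The host names and backend service in this sketch are placeholders.

```yaml
# Hypothetical Ingress protected by oauth2-proxy via ingress-nginx annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: files
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.home/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.home/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
    - host: files.example.home
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud
                port:
                  number: 80
```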

Identity Management

Having run into the identity-management problem in past projects (I've literally published five web apps that each became some iteration of a user-profile application), I found Keycloak particularly useful for managing users and federating accounts. Keycloak is an open-source enterprise service for identity, authentication, authorization, and account federation; it is part of the JBoss project and backed by Red Hat. Since I wanted to use the Keycloak Helm chart, the Keycloak service runs with a Postgres backend.
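
The exact Helm values depend on which Keycloak chart is used, but charts that bundle a PostgreSQL subchart typically follow a pattern roughly like this (a hedged sketch, not my actual values):

```yaml
# Rough values.yaml sketch for a Keycloak chart with a bundled Postgres backend.
postgresql:
  enabled: true   # deploy the PostgreSQL subchart alongside Keycloak
```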

Protecting Application Secrets

Nobody likes a data breach. One of the most common ways sensitive application secrets get exposed is through environment variables or templated configuration. HashiCorp produced an awesome service called Vault which allows you to manage and protect application secrets. Vault is easier to adopt on Kubernetes when using the Banzai Cloud Vault Operator, which enables a multi-tenant Vault service as well as injection of secrets into Kubernetes pods using annotations and a mutating webhook.

NextCloud and Keycloak use the Vault Operator to inject a randomly generated database password so that each service can connect to its backend database. The Oauth2-proxy containers use it to inject Keycloak client IDs and client secrets.
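
As a rough sketch of how the webhook is used (the Vault address, role, secret path, and image below are placeholders, not my actual configuration), secrets are referenced with vault: prefixes in a pod's environment and resolved at container start-up:

```yaml
# Hypothetical Deployment showing the Vault Operator's webhook annotations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        vault.security.banzaicloud.io/vault-addr: "https://vault.vault:8200"
        vault.security.banzaicloud.io/vault-role: "default"
    spec:
      containers:
        - name: example-app
          image: example/app:latest
          env:
            # The webhook swaps this reference for the real secret value.
            - name: DB_PASSWORD
              value: "vault:secret/data/example-app/db#password"
```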

Certificates

Along similar lines to the application security discussed in the previous section, insecure connections are another common source of sensitive-data leaks. In Kubernetes, certificates can be generated and managed dynamically using the cert-manager service.

The home cluster uses a self-signed ECDSA certificate as its root certificate. cert-manager dynamically issues certificates from that root whenever pods or ingresses request one on creation.
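
The bootstrap looks roughly like this with cert-manager's CRDs: a self-signed issuer signs an ECDSA root CA, and a CA issuer then signs certificates for the rest of the cluster. Resource names here are placeholders.

```yaml
# Sketch of a cert-manager self-signed root CA bootstrap.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: homelab-root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: homelab-root-ca
  secretName: homelab-root-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: homelab-ca
spec:
  ca:
    secretName: homelab-root-ca   # issues leaf certificates from the root above
```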

Monitoring and Observability

The first level of monitoring in the home cluster happens inside the service mesh. A service mesh proxies traffic between services inside the Kubernetes cluster to provide mTLS security, monitoring, and additional reliability. While Istio is the most popular Kubernetes service mesh, I selected Linkerd for its ease of installation and use.
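
Meshing workloads with Linkerd is mostly a matter of annotating them (or their namespace) so the proxy injector adds its sidecar. A minimal sketch, with a placeholder namespace name:

```yaml
# Hypothetical namespace opted in to Linkerd sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  annotations:
    linkerd.io/inject: enabled
```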

Hand in hand with the service mesh is the Logging Operator. The Logging Operator provides real-time log streaming, using Fluentd and Fluent Bit to ship log events to a third-party service (Grafana Loki, ELK, etc.). The Fluent agents attach to monitored workloads, capture their container logs, and stream them to the configured destination service.
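
A minimal sketch of the Logging Operator resources, assuming Grafana Loki as the destination; the label selector and Loki URL are placeholders, and field names can vary slightly between operator versions.

```yaml
# Hypothetical Output/Flow pair shipping selected container logs to Loki.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: loki
spec:
  loki:
    url: http://loki.logging:3100
    configure_kubernetes_labels: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-logs
spec:
  match:
    - select:
        labels:
          app: example-app   # only ship logs from pods with this label
  localOutputRefs:
    - loki
```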

Application Hosting

Due to the resource constraints of running on embedded devices, special considerations had to be made for hosting my personal projects. I wanted as little resource overhead as possible. In the cloud world, the term serverless describes platforms where developers can focus on the software they develop without having to consciously consider the infrastructure it runs on. Services are composed of multiple single-purpose functions which execute and terminate on demand.

Similar functionality exists for Kubernetes clusters. I originally investigated Kubeless as a platform for running serverless applications; however, I ultimately decided on OpenFaaS due to its availability and community support.
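
With OpenFaaS, a function is described in a stack file and deployed with faas-cli. In this sketch, the function name, language template, handler path, and image are all placeholders.

```yaml
# Hypothetical OpenFaaS stack.yml for a single function.
version: 1.0
provider:
  name: openfaas
  gateway: http://gateway.openfaas:8080   # in-cluster gateway address
functions:
  hello-homelab:
    lang: node14
    handler: ./hello-homelab
    image: registry.local/hello-homelab:latest
```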

Additional Services

Though not in current use, I wanted to be able to index data from my personal projects so that it can be searched. I've worked with Elasticsearch and it is my preferred index. Fortunately, Elastic also provides Elastic Cloud on Kubernetes (ECK).

ECK enables the deployment of a multi-tenant Elastic environment that can be fully expressed using Kubernetes Custom Resource Definitions (CRDs). ECK is fairly easy to set up and can pass all configuration options available for the Elastic services (Elasticsearch, Kibana, Beats) directly to the Kubernetes pods.
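
For example, a single-node Elasticsearch cluster is just one custom resource; the version and node count here are illustrative.

```yaml
# Hypothetical Elasticsearch cluster managed by ECK.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: homelab
spec:
  version: 7.13.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false   # avoids raising vm.max_map_count on small hosts
```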

References

I published the cluster set-up as an Ansible playbook on GitHub.

So there it is! I've finally got a reliable application host set up using clustered container orchestration. Let me know if there are any questions, or if there's a particular area where I could go into further detail.
