How I built my own SeedBox with K8S

Àlex Serra (bounteous17)

Posted on July 28, 2024


We are going to build a self-hosted, highly available (HA) SeedBox running qBittorrent, a client for BitTorrent, one of the most popular decentralized network protocols in the world.

Introduction

You may have noticed that whenever P2P networks come up in conversation, someone alludes to piracy or to undesirable content being shared without control. Ultimately, the technology is not to blame for its misuse; still, even though it is a decentralized network, its traffic can be throttled or blocked by ISPs.

Nice to have:

Prepare the scenario

You've probably already played the Kubernetes game and know what it's all about; if not, I highly recommend discovering the best container orchestrator in the world and playing with it first.

You can take a look at this public repository I have prepared, which makes available a Helm chart that each of us can customize to our own needs. I am following this other guide as I write this to publish the repository as a chart package.

I almost forgot to mention that we need a NAS server reachable from the network of our Kubernetes cluster. Mine runs on a Raspberry Pi 3 with an HDD attached, published on the network with OpenMediaVault.

> showmount -e 192.168.2.10 # list exported paths
Export list for 192.168.2.10:
/export                      192.168.2.0/24
/export/home-lab-nas-runtime 192.168.2.0/24

Sadly, the hardware used for the NAS server is a bottleneck: the local network is much faster than the USB 2.0 port the disk is attached to.

First contact

At the time of writing, Werf does not yet implement all of Helm's functionality, which is why it bundles a separate Helm installation to cover what is still missing.

> helm version
version.BuildInfo{Version:"v3.15.3", GitCommit:"3bb50bbbdd9c946ba9989fbe4fb4104766302a64", GitTreeState:"clean", GoVersion:"go1.22.5"}
> werf helm version                                                                                                                                                                                             
version.BuildInfo{Version:"v3.14", GitCommit:"", GitTreeState:"", GoVersion:"go1.21.6"}

Fortunately, Werf implements the Helm specification, so if you feel more comfortable using the helm CLI you can keep doing so.

Installing the chart

We have two ways to do it. Both methods read the preconfigured values that best suit my scenario, but you should adjust them to yours.

Feel free to edit the .helm/values.yaml file. You will mainly need to modify the volumes-related section to match the directory structure you have configured on your NAS server.

The chart parameters that can be modified have been documented here.
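As a rough illustration of what that section can look like, here is a sketch pointing two persistent volumes at the NFS export listed earlier; the key names and sub-paths are assumptions on my part, so check the documented parameters rather than copying this verbatim:

volumes:
  config:
    nfs:
      server: 192.168.2.10                                      # NAS from the showmount output above
      path: /export/home-lab-nas-runtime/qbittorrent/config     # qBittorrent settings survive pod restarts
  downloads:
    nfs:
      server: 192.168.2.10
      path: /export/home-lab-nas-runtime/qbittorrent/downloads  # completed and in-progress torrents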

Werf converge (Recommended)

We can clone the chart's git repository and deploy it in an orderly manner. The first joy werf gives us is a detailed, real-time output of what is happening with the deployment; plain helm does not offer this.
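Assuming the chart source lives in the repository behind the GitHub Pages URL used further down (bounteous17/helm-chart-qbittorrent is my guess), the whole flow looks like this:

> git clone https://github.com/bounteous17/helm-chart-qbittorrent.git # repository URL inferred from the Pages URL below
> cd helm-chart-qbittorrent
> werf converge --dev # edit .helm/values.yaml first so the volumes match your NAS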

If no working path is specified, werf attempts the deployment from the default .helm directory.

Nelm is the rewritten implementation of Helm that werf uses for deployments. Unfortunately it does not yet have a dedicated command, so the only way to use it is through the werf CLI.

> werf converge --dev                                                                                [±master ✓]
Version: v2.6.4
Using werf config render file: /tmp/werf-config-render-2521405566
Starting release "qbittorrent" (namespace: "qbittorrent")
Constructing release history
Constructing chart tree
Processing resources
Constructing new release
Constructing new deploy plan
Starting tracking
Executing deploy plan
┌ Progress status
│ RESOURCE (→READY)                    STATE    INFO
│ Deployment/qbittorrent               WAITING  Ready:0/1
│  • Pod/qbittorrent-648fd97cd7-fbrcw  CREATED  Status:ContainerCreating
│ Ingress/qbittorrent                  READY
│ Service/qbittorrent                  READY
└ Progress status

┌ Progress status
│ RESOURCE (→READY)                    STATE  INFO
│ Deployment/qbittorrent               READY  Ready:1/1
│  • Pod/qbittorrent-648fd97cd7-fbrcw  READY  Status:Running
└ Progress status

┌ Completed operations
│ Create resource: Deployment/qbittorrent
│ Create resource: Ingress/qbittorrent
│ Create resource: Service/qbittorrent
└ Completed operations

Succeeded release "qbittorrent" (namespace: "qbittorrent")
Running time 8.98 seconds
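Once converge reports the release as succeeded, a quick sanity check with kubectl (the qbittorrent namespace comes from the output above) confirms what was created:

> kubectl -n qbittorrent get deployment,pods,service,ingress # lists the resources shown in the converge summary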

Helm install

Perhaps the most classic way to deploy an application is with helm. There is not much to add here, except that, compared with the level of detail the previous option gives us, this one can be more tedious when we need to debug errors.

> werf helm repo add home-lab-qbittorrent https://bounteous17.github.io/helm-chart-qbittorrent
"home-lab-qbittorrent" has been added to your repositories
> werf helm search repo home-lab-qbittorrent                                                                                                                                                                    
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION                
home-lab-qbittorrent/qbittorrent        0.1.0           4.6.5-r0-ls334  A Helm chart for Kubernetes
> werf helm install home-lab-qbittorrent home-lab-qbittorrent/qbittorrent
NAME: home-lab-qbittorrent
LAST DEPLOYED: Sun Jul 28 11:18:36 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
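If the plain helm route does not go as smoothly as above, these are the usual commands to fall back on for debugging; the release was installed into the default namespace here, and <pod-name> is whatever name kubectl reports:

> werf helm status home-lab-qbittorrent # release status and revision
> kubectl get pods # find the qbittorrent pod in the default namespace
> kubectl describe pod <pod-name> # events such as failed NFS mounts or image pulls
> kubectl logs <pod-name> # qBittorrent application logs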

Let's make torrents eternal :)

Our application will be available under the ingress host configured through the ingress.host parameter.
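For example, with ingress.host set to something like qbittorrent.home.lab (a made-up hostname), that name still has to resolve to your ingress controller; on a home network the quickest way is a hosts entry, where 192.168.2.20 stands in for the controller's IP:

> echo "192.168.2.20 qbittorrent.home.lab" | sudo tee -a /etc/hosts # hostname and IP are examples, use your own

After that, the qBittorrent WebUI opens at http://qbittorrent.home.lab.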

Running the qBittorrent application

As you will have seen in the chart's .helm/values.yaml file, the default values indicate the filesystem paths where we store the data that must not disappear if our container is restarted within the cluster.

Now that data persistence is assured, we can access the NAS server from clients other than our cluster deployment to read the downloaded data.

If you are using a Linux machine as a second client to access the NAS volume with the downloaded content, be sure to check out this guide for an optimal setup. Ultimately, the advantage of this well-performing network protocol (NFS) is that it is supported by almost every operating system.
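As a minimal sketch, mounting the same export from the showmount output on a Linux client looks like this; the mount point is just an example:

> sudo mkdir -p /mnt/seedbox
> sudo mount -t nfs 192.168.2.10:/export/home-lab-nas-runtime /mnt/seedbox # same export listed by showmount earlier
> ls /mnt/seedbox # downloaded content should show up here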

Indeed, it would be really cool to set up a Jellyfin server now and connect it to the network volume to enjoy the content from the couch ;)

I will be super happy to answer any questions (whatever your skill level, don't be afraid) and to share solutions to any problems you may have run into during this process.
