Dev Containers on Kubernetes With DevSpace

Friedrich Kurz

Posted on May 14, 2024

Motivation

It is probably undisputed that a robust, easy-to-use, and quickly set up development environment is one of the key drivers of high developer productivity. Typical productivity boosters include, for example,

  • reduced onboarding time;
  • lowered maintenance efforts;
  • homogenization of development environments (e.g. across CPU architectures); and
  • easy access to required configurations and resources.

A major concern when talking about development environments is, of course, packaging and distribution: we want to quickly and reliably ramp up (and also tear down) development environments in order to ensure the productivity gains described earlier.

A good way of handling the packaging problem is containerization: container images not only package the tooling required for the development process and allow us to run workloads within a pre-configured environment; they may also be versioned, uploaded, and downloaded using established infrastructure components (i.e. image repositories).

Unsurprisingly, there are quite a lot of tools that operate in the space of providing development setups using containers. Some of the more prominent ones include

(I personally refer to these tools as dev container tools.)

DevSpace

DevSpace, the topic of this post, also falls into the category of dev container tools. It has the advantage over the aforementioned choices, however, that it provides a very generic and customizable approach to bootstrapping development environments. To me, the three most striking features are

  • the configurable, out-of-the-box SSH server injection,
  • the two-way sync capability between the local host file system and the development container file system, and
  • the fact that DevSpace development containers run on Kubernetes.

The first point is great because remote development using SSH is a tried and true approach with lots of tooling support (e.g. VS Code, IntelliJ, and Neovim all support remote development via SSH). Consequently, it lets developers stay flexible w.r.t. their editor/IDE choice.
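For example, with VS Code's Remote-SSH extension installed, a remote workspace can be opened straight from the command line. A minimal sketch, assuming a hypothetical Host entry named dev-container in ~/.ssh/config that points at the dev container:

# Open /home/dev on the SSH host "dev-container" in a remote VS Code window
# (requires the Remote-SSH extension).
code --remote ssh-remote+dev-container /home/dev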

A fast and reliable two-way sync mechanism is also great to have because it gives us quick and easy, albeit limited, persistence (limited to the synced directories, of course) without having to configure persistent volumes or mount directories as you would when using only Docker to run a development container. Since containers should be ephemeral, this is a very easy way to keep changes that you want to persist stored away safely without much additional setup.

As for the last point, running on Kubernetes is a great way to organize and quickly ramp up, as well as tear down, development resources. E.g. we may use Kubernetes namespaces to scope resources for a specific developer; additionally, we may use Kubernetes abstractions to provide and manage access to

  • configuration and credentials,
  • downstream network resources (e.g. giving access to third-party systems via external name services), or
  • physical resources (like GPUs).

Lastly, you gain the capability to conveniently lift and shift your development from a local Kubernetes cluster to a remote Kubernetes cluster by simply changing the Kubernetes configuration.
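To make the downstream network resources point above a bit more concrete: an ExternalName Service is one way to expose a third-party system under a stable in-cluster name. A minimal sketch; the service name and target host are made up for illustration:

# Pods in the devspace namespace can reach the third-party API via the
# in-cluster name "third-party-api" instead of hard-coding the external host.
apiVersion: v1
kind: Service
metadata:
  name: third-party-api
  namespace: devspace
spec:
  type: ExternalName
  externalName: api.example.com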

DevSpace, moreover, is an official Cloud Native Computing Foundation (CNCF) project with over 4k stars on GitHub (at the time of writing) and it is, therefore, very likely to be actively worked on in the foreseeable future.

Basic Development Workflow with DevSpace

💡 This section is a short tutorial illustrating the development workflow with DevSpace. If you want to try it out, please have a look at the proof of concept repository available on my GitHub. It includes setup instructions for starting a dev container on AWS, including code for infrastructure setup and an example dev container Dockerfile.

To use DevSpace, we first have to install it. For example, on a Linux/ARM64 machine:

curl -L -o devspace "https://github.com/loft-sh/devspace/releases/latest/download/devspace-linux-arm64" && \
sudo install -c -m 0755 devspace /usr/local/bin

ℹ️ See here for more installation options.

Assuming that we already have access to a Kubernetes cluster and that we have pointed kubectl to use the corresponding context, we should—as a best practice—create a unique namespace—e.g. devspace—for our development environment and then tell DevSpace to use the targeted context and namespace.

$ kubectl create namespace devspace
namespace/devspace created
$ devspace use namespace devspace
info The default namespace of your current kube-context 'kind-kind' has been updated to 'devspace'
         To revert this operation, run: devspace use namespace

done Successfully set default namespace to 'devspace'
$ devspace use context "$(kubectl config current-context)"
done Successfully set kube-context to 'arn:aws:eks:eu-central-1:174394581677:cluster/devspace-eks-QbUEJaxD'

💡 Creating a unique, separate namespace for every developer or feature is a very good way to prevent conflicts. E.g. if we need to change external configuration (e.g. ConfigMaps or Secrets) or change the API of a service during feature development, keeping these changes isolated in a dedicated namespace prevents breaking the workflow of other developers.
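A per-developer namespace can, for instance, be derived from the local user name. A small sketch, assuming a Unix-like shell:

# Create and select a personal namespace, e.g. "dev-alice".
kubectl create namespace "dev-$(whoami)"
devspace use namespace "dev-$(whoami)"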

We also need to tell DevSpace what kind of dev container to deploy for us. The way to do this is the DevSpace configuration file devspace.yaml. Below is an excerpt from the PoC repository mentioned earlier, with a few omissions for the sake of brevity (in particular, the .pipelines section, of which I unfortunately only have superficial knowledge at this point in time).

version: v2beta1
name: devspace

# ...

deployments:
  the-dev-container:
    helm:
      chart:
        name: component-chart
        repo: https://charts.devspace.sh
      values:
        containers:
          - image: "${THE_DEV_CONTAINER_IMAGE}"
            imagePullPolicy: IfNotPresent
            resources:
              requests:
                memory: "500Mi"
                cpu: "500m"
              limits:
                memory: "1Gi"
                cpu: "1"

dev:
  the-dev-container:
    imageSelector: "${THE_DEV_CONTAINER_IMAGE}"
    ssh:
      localPort: 60550 
    command: ["sleep", "infinity"]
    sync:
    - path: ./:/home/dev

vars:
  THE_DEV_CONTAINER_IMAGE:
    source: env
    default: ubuntu:22.04

Let's go over the configuration file step by step.

First, the .deployments section essentially defines what resources to deploy to the configured Kubernetes cluster. For our use case, the most pivotal deployment is our dev container. The Pod running our dev container will be deployed using Helm (since we specify a .deployments.the-dev-container.helm object). More specifically, we use the component-chart Helm chart provided by the DevSpace team. We pass values to the Helm chart with the .deployments.the-dev-container.helm.values property. Crucially, the dev container image is parameterized using the DevSpace variable THE_DEV_CONTAINER_IMAGE declared in the .vars section, so we are able to reuse the DevSpace configuration for different images.

ℹ️ Note that the-dev-container was arbitrarily chosen. You can pick whatever name you like best as long as it conforms to syntax requirements.

ℹ️ The .deployments section, moreover, allows multiple deployment specifications. We could theoretically also deploy ancillary resources like databases using separate Helm charts.
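For example, a database could be added as a second entry under .deployments and deployed from its own Helm chart. A sketch only; the chart name, repository, and values below are illustrative and not part of the PoC repository:

deployments:
  # the-dev-container: ... (as above)
  the-dev-database:
    helm:
      chart:
        name: postgresql
        repo: https://charts.bitnami.com/bitnami
      values:
        auth:
          postgresPassword: dev-only-password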

ℹ️ Details on the configurable Helm values for the component-chart Helm chart can be found in the chart documentation.

Moving on to the .dev section: we configure a development configuration for the deployment defined earlier by adding a .dev.the-dev-container object. We again have to specify the image here (using the THE_DEV_CONTAINER_IMAGE variable). This time, however, it is used as a selector so that DevSpace can find the Kubernetes Pod running the dev container.
More interestingly, we tell DevSpace to allow SSH access to the deployed dev container by adding a .dev.the-dev-container.ssh object. It is worth noting that the SSH connection may be fixed to a specific local port number (via .dev.the-dev-container.ssh.localPort) so that we don't have to change the configuration of tools connecting to our dev container via SSH whenever we redeploy it.

💡 Running devspace dev with enabled SSH will add an entry in ~/.ssh/config similar to the following one:

# DevSpace Start the-dev-container.devspace.devspace
Host the-dev-container.devspace.devspace
HostName localhost
LogLevel error
Port 60550
IdentityFile "/home/lima.linux/.devspace/ssh/id_devspace_ecdsa"
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
User devspace
# DevSpace End the-dev-container.devspace.devspace

DevSpace also creates a public and private key pair for authentication. This allows us to connect via SSH with

ssh -i ~/.devspace/ssh/id_devspace_ecdsa -l devspace -p 60550 localhost

or—using the SSH configuration—by simply running

ssh the-dev-container.devspace.devspace
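Since this is an ordinary SSH Host entry, other SSH-based tooling works against it as well, for example copying a file into the dev container's home directory with scp:

scp ./notes.md the-dev-container.devspace.devspace:/home/dev/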

Additionally, we add a blocking command to the dev container in .dev.the-dev-container.command, so that our pod doesn't terminate right away.
Lastly, the .dev.the-dev-container.sync property tells DevSpace to sync file changes between our current working directory (./) and the dev user's home directory on the dev container (/home/dev).
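If the working directory contains artifacts that should not be synced (build output, dependency caches, and the like), the sync entry can, as far as I know, also be narrowed down with exclude paths. A sketch; the listed directories are assumptions and not taken from the PoC configuration:

dev:
  the-dev-container:
    # ...
    sync:
    - path: ./:/home/dev
      excludePaths:
      - .git/
      - node_modules/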

As mentioned earlier, we parameterized the dev container image using DevSpace variables (declared in the .vars section). So, the last thing we have to do before we can actually launch our dev container with DevSpace is to build and upload a suitable dev container image to a registry that our cluster has access to (or use a public dev container image).
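Building and pushing such an image to ECR could look roughly as follows. A sketch, assuming Docker and the AWS CLI are configured, using the registry from the example below and the example Dockerfile from the PoC repository:

REGISTRY=174394581677.dkr.ecr.eu-central-1.amazonaws.com
# Authenticate Docker against the ECR registry.
aws ecr get-login-password --region eu-central-1 | \
  docker login --username AWS --password-stdin "$REGISTRY"
# Build and push the dev container image.
docker build -t "$REGISTRY/devspace-devcontainer:latest" .
docker push "$REGISTRY/devspace-devcontainer:latest"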

Let's assume for now that we have built and pushed a dev container image tagged 174394581677.dkr.ecr.eu-central-1.amazonaws.com/devspace-devcontainer:latest and that it is available to the provisioned cluster. We then may utilize the variable THE_DEV_CONTAINER_IMAGE declared in the devspace.yaml above to deploy a dev container with our custom dev container image using the DevSpace CLI's --var option:

$ devspace dev --var THE_DEV_CONTAINER_IMAGE="174394581677.dkr.ecr.eu-central-1.amazonaws.com/devspace-devcontainer:latest"
info Using namespace 'devspace'
info Using kube context 'arn:aws:eks:eu-central-1:174394581677:cluster/devspace-eks-3Hij2z5x'
deploy:the-dev-container Deploying chart /home/lima.linux/.devspace/component-chart/component-chart-0.9.1.tgz (the-dev-container) with helm...
deploy:the-dev-container Deployed helm chart (Release revision: 1)
deploy:the-dev-container Successfully deployed the-dev-container with helm
dev:the-dev-container Waiting for pod to become ready...
dev:the-dev-container Selected pod the-dev-container-devspace-847f75dd44-9httz
dev:the-dev-container sync  Sync started on: ./ <-> /home/dev
dev:the-dev-container sync  Waiting for initial sync to complete
dev:the-dev-container sync  Initial sync completed
dev:the-dev-container ssh   Port forwarding started on: 60550 -> 8022
dev:the-dev-container ssh   Use 'ssh the-dev-container.devspace.devspace' to connect via SSH

💡 Note the lines Sync started on: ./ <-> /home/dev and Port forwarding started on: 60550 -> 8022, which tell us that the continuous two-way sync between the local working directory and the dev container directory /home/dev was established and that the SSH server is reachable on local port 60550.

And that's it. 🚀 As a simple test, we may run the hostname command on the remote dev container.

$ ssh the-dev-container.devspace.devspace 'hostname'
the-dev-container-devspace-847f75dd44-9httz

As expected from a container running in a Kubernetes Pod, this will return the Pod identifier.
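We can cross-check the Pod name with kubectl in the namespace created earlier:

kubectl get pods -n devspace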

Assuming sufficient CPU and memory are allocated to the container, we could now continue by attaching shells or connecting an IDE or editor via SSH and start developing. After we're done, we simply detach from the container and run devspace purge to clean up the deployed resources.
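In full, the teardown amounts to the following; if the namespace itself is no longer needed, it can be deleted as well (a sketch):

devspace purge
kubectl delete namespace devspace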

Discussion

DevSpace is a CNCF project that allows us to easily deploy dev containers on a Kubernetes cluster. Compared to other tools for dev container workflows, it has the advantage that it

  • supports connections via SSH,
  • has a robust two-way sync mechanism, and
  • deploys to Kubernetes.

As shown above, once a Kubernetes cluster and a container image registry that is accessible from the cluster are available, the development workflow is simple and very flexible w.r.t. the development tooling. Moreover, we may use Kubernetes abstractions to support some of the desired features of a development environment (e.g. resource isolation via namespaces).

DevSpace is therefore a very reasonable choice for development teams that have access to both of these infrastructure resources, since it allows them to leverage the benefits of dev containers with minimal setup requirements while staying very flexible w.r.t. tooling choice.
