☸️ Kubernetes: From Your Docker-Compose File to a Cluster with Kompose

bcouetil

Benoit COUETIL 💫

Posted on March 9, 2024


Initial thoughts

When starting a project, the development team typically creates Dockerfiles and a docker-compose file to run locally the parts of the stack they are not currently working on. Despite Kubernetes' dominance in container orchestration, the docker-compose file's simplicity keeps it relevant for local needs.

As the time comes to move from local development to a Kubernetes cluster, you may find yourself with a docker-compose file but limited knowledge of Kubernetes (for now). Perhaps you aim to streamline the initialization of manifests, or even aspire to maintain a single docker-compose file for both local and shared Kubernetes environments.

Let's explore the exciting potential of Kompose in the context of a basic web application 🚀

1. About Kompose

As stated on their homepage, with Kompose, "you can now push the same file to a production container orchestrator". The tool covers a wide range of Kubernetes features, among which these are meaningless locally but crucial on Kubernetes:

  • ingress
  • volumes
  • secrets
  • resources request/limit
  • variable interpolation with safe defaults

The first version of Kompose goes back to June 2016, so the project has come a long way since, with about 50 releases so far.

2. Example application description and constraints

In this article, we aim to deploy an application consisting of a database, a backend, and a frontend, all from a mono-repository. Let's detail some important needs when working with Kubernetes.

To ensure flexibility, the exposed URL (ingress) from the Kubernetes cluster must be configurable. This can be achieved using the kompose.service.expose label.

Configurability of resource requests and limits is mandatory for a production-grade application. This is handled through the deploy.resources section of the docker-compose file.

Effective secrets management using docker-compose is essential too. Kompose handles this aspect seamlessly.

Maintaining the same docker-compose file for both local usage and remote Kubernetes clusters updated through CI/CD requires careful variable interpolation. Key fields for interpolation include:

  • Image name
  • Image tags
  • Application variables
  • Exposed URLs
  • Exposed ports

URL and port variables play a vital role in this context. While docker-compose typically exposes everything on localhost using different ports, Kubernetes applications are usually exposed on ports 80 (HTTP) / 443 (HTTPS) across various subdomains.
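The ${VAR:-default} syntax makes this work in both worlds: docker-compose falls back to the defaults locally, while CI/CD injects real values. A quick POSIX shell sketch of the ingress variables used later in the compose file:

```shell
# Without values set, the safe defaults apply (local usage)
unset NAMESPACE ROOT_DNS
echo "app-${NAMESPACE:-dns}.${ROOT_DNS:-local}"    # app-dns.local

# With values injected by the pipeline (remote usage)
NAMESPACE=review-42 ROOT_DNS=example.com
echo "app-${NAMESPACE:-dns}.${ROOT_DNS:-local}"    # app-review-42.example.com
```

docker-compose applies the same interpolation rules when it reads the file, which is what lets one file serve both environments.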

[Illustration: a light blue octopus swimming, a blue whale eating plankton, manga style]

3. Associated docker-compose example

Here is the associated docker-compose file matching our above constraints.

version: "3.7"

services:
  app-db:
    image: "postgres:15-alpine"
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${PGPWD:-pg_password}

  app-back:
    image: "${IMAGE_GROUP:-app}/back:${IMAGE_TAG:-latest}"
    build:
      context: back/target
      dockerfile: ../Dockerfile # relative to context
    ports:
      - ${INGRESS_PORT:-8080}:8080
    depends_on:
      - app-db
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://app-db:5432/postgres
      SPRING_DATASOURCE_USERNAME: postgres
      SPRING_DATASOURCE_PASSWORD: ${PGPWD:-pg_password}
      SENSITIVE_DATA_FILE: /run/secrets/sensitive_data
    secrets:
      - sensitive_data
    labels:
      kompose.service.expose: "app-${NAMESPACE:-dns}.${ROOT_DNS:-local}/api"
    deploy:
      resources:
        reservations:
          cpus: "0.01"
          memory: 512M

  app-front:
    image: "${IMAGE_GROUP:-app}/front:${IMAGE_TAG:-latest}"
    build:
      context: front/
      dockerfile: Dockerfile # relative to context
    depends_on:
      - app-back
    ports:
      - ${INGRESS_PORT:-80}:80
    labels:
      kompose.service.expose: "app-${NAMESPACE:-dns}.${ROOT_DNS:-local}"
    deploy:
      resources:
        reservations:
          cpus: "0.01"
          memory: 64M

secrets:
  sensitive_data:
    file: sensitive_data.txt

4. Start the stack locally

To begin using the stack locally, you can follow the standard docker-compose process, as variables are interpolated with safe defaults.

$> docker-compose build
$> docker-compose up
[...]
Ctrl+C
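One detail to watch: the sensitive_data secret at the bottom of the compose file points at sensitive_data.txt, so that file must exist before the stack starts. A minimal sketch, with placeholder content:

```shell
# Create the secrets file referenced by the compose file
# (placeholder value; real secrets should never be committed)
printf 'my-sensitive-value\n' > sensitive_data.txt
cat sensitive_data.txt
```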

5. Generate Kubernetes manifests locally

After a quick and easy installation of kompose, we can generate the manifests:

$> kompose convert --out tmp/k8s/

INFO Kubernetes file "tmp/k8s/app-back-service.yaml" created
INFO Kubernetes file "tmp/k8s/app-db-service.yaml" created
INFO Kubernetes file "tmp/k8s/app-front-service.yaml" created
INFO Kubernetes file "tmp/k8s/sensitive-data-secret.yaml" created
INFO Kubernetes file "tmp/k8s/app-back-deployment.yaml" created
INFO Kubernetes file "tmp/k8s/app-back-ingress.yaml" created
INFO Kubernetes file "tmp/k8s/app-db-deployment.yaml" created
INFO Kubernetes file "tmp/k8s/app-front-deployment.yaml" created
INFO Kubernetes file "tmp/k8s/app-front-ingress.yaml" created

We can then examine one of the generated files. Here is an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --out tmp/k8s/
    kompose.service.expose: app-dns.local/api
    kompose.version: 1.32.0 (765fde254)
  labels:
    io.kompose.service: app-back
  name: app-back
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app-back
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --out tmp/k8s/
        kompose.service.expose: app-dns.local/api
        kompose.version: 1.32.0 (765fde254)
      labels:
        io.kompose.network/app-default: "true"
        io.kompose.service: app-back
    spec:
      containers:
        - env:
            - name: SENSITIVE_DATA_FILE
              value: /run/secrets/sensitive_data
            - name: SPRING_DATASOURCE_PASSWORD
              value: pg_password
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://app-db:5432/postgres
            - name: SPRING_DATASOURCE_USERNAME
              value: postgres
          image: app/back:latest
          name: app-back
          ports:
            - containerPort: 8080
              hostPort: 8080
              protocol: TCP
          resources:
            requests:
              cpu: 10m
              memory: "536870912"
          volumeMounts:
            - mountPath: /run/secrets/sensitive_data
              name: sensitive_data
      restartPolicy: Always
      volumes:
        - name: sensitive_data
          secret:
            items:
              - key: sensitive_data
                path: sensitive_data
            secretName: sensitive_data
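Note the hostPort: 8080 in the manifest above: kompose carries the host side of the compose ports: mapping over as a hostPort, which pins pods to a fixed node port and is usually unwanted on a cluster. One possible workaround (assuming GNU sed) is to strip those lines from the generated manifests, sketched here on a minimal extract:

```shell
# Sketch: drop the hostPort lines kompose generated from the compose
# `ports:` mappings (minimal fixture standing in for the real manifests)
mkdir -p tmp/k8s
cat > tmp/k8s/app-back-deployment.yaml <<'EOF'
ports:
  - containerPort: 8080
    hostPort: 8080
    protocol: TCP
EOF
sed -i '/hostPort:/d' tmp/k8s/*-deployment.yaml
cat tmp/k8s/app-back-deployment.yaml
```

The Service and Ingress objects keep routing traffic to the containerPort, so nothing else needs to change.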


6. Deploy with dynamically generated manifests

For those wanting to maintain the docker-compose file as a single source of truth, we can venture into dynamic manifest generation and deployment in CI/CD.

We obviously have to build and push Docker images first, so that the Kubernetes cluster can pull what the manifests reference.

Each infrastructure context is different; here is an example involving AWS.

.docker-build:
  stage: build
  image: docker:25.0-cli
  variables:
    # project CICD vars: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION
  before_script:
    # authenticate to AWS ECR
    - wget -O /usr/bin/docker-credential-ecr-login https://amazon-ecr-credential-helper-releases.s3.us-east-2.amazonaws.com/0.6.0/linux-amd64/docker-credential-ecr-login && chmod a+x /usr/bin/docker-credential-ecr-login && docker-credential-ecr-login -v
    - mkdir ~/.docker && echo '{"credsStore":"ecr-login"}' > ~/.docker/config.json
    - cat ~/.docker/config.json
    - docker info
  script:
    - docker build -f $MODULE/Dockerfile -t $IMAGE_GROUP/$MODULE:$IMAGE_TAG $DOCKER_WORKING_FOLDER
    - docker push $IMAGE_GROUP/$MODULE:$IMAGE_TAG

back-build:
  stage: build
  extends: .docker-build
  dependencies: [back-package]
  variables:
    MODULE: back
    DOCKER_WORKING_FOLDER: $MODULE/target

front-build:
  stage: build
  extends: .docker-build
  dependencies: [front-package]
  variables:
    MODULE: front
    DOCKER_WORKING_FOLDER: $MODULE
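For the compose interpolation and these jobs to agree, IMAGE_GROUP must contain the full registry path and IMAGE_TAG a unique tag, typically defined as CI/CD variables. A hypothetical example (the account ID, region, and tag are placeholders):

```shell
# Hypothetical ECR values; your registry host and repository will differ
IMAGE_GROUP="123456789012.dkr.ecr.eu-west-1.amazonaws.com/app"
IMAGE_TAG="v1.4.2"
MODULE=back
# This is the image reference both `docker push` and the manifests will use
echo "$IMAGE_GROUP/$MODULE:$IMAGE_TAG"
```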

And here is a deployment job using GitLab CI. The script includes additional commands to show pod logs and wait for deployment completion.

deploy:
  stage: deploy
  image: alpine/k8s:1.29.1
  variables:
    NAMESPACE: $CI_COMMIT_REF_SLUG
  before_script:
    # init namespace
    - kubectl config use-context $KUBE_CONTEXT
    - kubectl create namespace $NAMESPACE || true
    # download tools
    - curl --show-error --silent --location https://github.com/stern/stern/releases/download/v1.22.0/stern_1.22.0_linux_amd64.tar.gz | tar zx --directory /usr/bin/ stern && chmod 755 /usr/bin/stern && stern --version
    - curl --show-error --silent --location https://github.com/kubernetes/kompose/releases/download/v1.32.0/kompose-linux-amd64 -o /usr/local/bin/kompose && chmod a+x /usr/local/bin/kompose && kompose version
    # show logs asynchronously. Timeout to avoid hanging indefinitely when an error occurs in script section
    - timeout 1200 stern -n $NAMESPACE "app-" --tail=0 --color=always & # in background, tail new logs if any (current and incoming) pod with this regex as name
    - timeout 1200 kubectl -n $NAMESPACE get events --watch-only & # in background, tail new events in background
  script:
    # first delete CrashLoopBackOff pods, polluting logs
    - kubectl -n $NAMESPACE delete pod `kubectl -n $NAMESPACE get pods --selector app.kubernetes.io/component=$MODULE | awk '$3 == "CrashLoopBackOff" {print $1}'` || true
    # now deploying
    - kompose convert --out k8s/
    - kubectl apply -n $NAMESPACE -f k8s/
    - echo -e "\e[93;1mWaiting for the new app version to be fully operational...\e[0m"
    # waiting for successful deployment
    - kubectl -n $NAMESPACE rollout status deploy/app-db
    - kubectl -n $NAMESPACE rollout status deploy/app-back
    - kubectl -n $NAMESPACE rollout status deploy/app-front
    # on any error before this line, the script will still wait for these threads to complete, so the initial timeout is important. Adding these commands to after_script does not help
    - pkill stern || true
    - pkill kubectl || true
  after_script: # show namespace content
    - kubectl config use-context $KUBE_CONTEXT
    - kubectl -n $NAMESPACE get deploy,service,ingress,pod

Wrapping up

Tools like Kompose can greatly simplify the transition from local development with docker-compose to deploying applications on a Kubernetes cluster. By enabling seamless conversion of docker-compose files into Kubernetes manifests, Kompose allows developers to maintain a single source of truth for their deployment configuration. This streamlines the deployment process and ensures consistency between local and production environments.

With its support for key Kubernetes features like ingress, volumes, secrets, and resource management, Kompose offers a valuable solution for developers looking to leverage the power of Kubernetes without the need for extensive knowledge of the platform 🚀

We have just started using Kompose seriously; feedback from the trenches is yet to come. This article will be updated if and when needed. In the meantime, feel free to share your own experience with it in the comments below 🤓


Illustrations generated locally by Pinokio using Stable Cascade plugin


This article was enhanced with the assistance of an AI language model to ensure clarity and accuracy in the content, as English is not my native language.
