Kuma Meshes Head-On - A beginners guide


João Esperancinha

Posted on April 12, 2024


To quickly start learning Kuma, one of the most important things we need is a cluster. We also need a way to find out the status of our pods in Kubernetes (a.k.a. k8s), we need to be able to install Kuma, and finally we need to be able to issue some Kuma commands.

This is a long way of saying that we need to install four essential commands in order to make everything ready for Kuma. These commands are:

  • kind - This is also known as Kubernetes in Docker. This is the command that spares us the heavy lifting of creating a local cluster by hand before we can even use kubectl.
  • kubectl - Probably the most expected one on this list, if you are already used to working with k8s. This is how we issue commands to our k8s cluster.
  • helm - Helm allows us to run some very handy scripts that, amongst other things, install the Kuma control plane.
  • kumactl - We will not be using this command very often in this guide, but it is important to be aware of how to use it.

This guide will show you how to do this on Ubuntu, and everything here has been tested on an Ubuntu system. If you are interested in a guide on how to install this on macOS, Windows, or any other operating system you may have, please give me a shout at my JESPROTECH community YouTube channel.


I. Installing the commands

Image description


kind (Kubernetes in Docker)

In order to install kind, we need to issue these commands:

[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

It is important to note that the kind command will be installed in /usr/local/bin/kind. This may vary per system, even within Linux distributions.


Installing certificates and GPG keys

Both the helm and kubectl commands need certain GPG keys to be present before they can be installed. This is how we add them, along with the corresponding repositories, to our local apt configuration:

sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update

kubectl

The installation of kubectl is very easy once the previous step is complete:

sudo apt-get install -y kubelet kubeadm kubectl

Of these, only kubectl is strictly needed for this guide; kubelet and kubeadm aren't mandatory, but it is a good idea to install them anyway.


helm

As you may have already guessed, helm is also now very easy to install:

sudo apt-get install -y helm

kuma

Kuma installation can be a bit cumbersome because it involves one manual step, but first we need to download our dependencies:

cd ~ || exit;
curl -L https://kuma.io/installer.sh | VERSION=2.6.1 sh -

Be sure to be in your HOME folder before issuing this command. It is important to have Kuma installed in a place where it is easily accessible and easily spotted should we, for example, decide to remove it.

Once we are done with that, it is also very important to add the bin folder to our PATH:

export PATH=~/kuma-2.6.1/bin:$PATH;

Adding this line to the end of (or anywhere in) your startup script will make this permanent. Your startup script may be .bashrc, .zshrc, .profile, or possibly take another form.
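For example, if your shell uses .bashrc, making the change permanent and confirming that kumactl is reachable could look like this (a small sketch; adjust the file and version to your own setup):

# Persist the PATH change in .bashrc and reload the shell configuration.
echo 'export PATH=~/kuma-2.6.1/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

# kumactl should now be resolvable from any new shell.
kumactl version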


k9s

Installing k9s is also quite different from the other applications. In this case, we can use either pacman or brew on Linux. I have used brew mostly on macOS and hardly ever needed it on Linux, but in this case it is very much needed, and so first we need to install brew like this:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Once the brew installation completes, all we have to do is install k9s ("kanines"):

brew install derailed/k9s/k9s

One thing that is important to take into account, and you'll probably notice this once you install and start running k9s for the first time, is that k9s will crash if a cluster that it is monitoring gets removed and/or added.
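With that, all the tooling is in place. Before moving on, it is worth confirming that every command is available on the PATH; a minimal check (the reported versions will naturally differ per machine) is simply:

kind version
kubectl version --client
helm version
kumactl version
k9s version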


II. Creating the cluster

kind create cluster --name=wlsm-mesh-zone
kubectl cluster-info --context kind-wlsm-mesh-zone

The first command creates a cluster named wlsm-mesh-zone. This is just a cluster that we will use to install Kuma.
The second command is used to check the status of the cluster.
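As an extra sanity check, we can also confirm that kind registered the cluster and that its control-plane node reports Ready:

kind get clusters
kubectl get nodes --context kind-wlsm-mesh-zone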


III. Creating a local docker registry

As I mentioned before, we can create a Docker registry quite easily. But as easy as it may sound, the script to do it is a handful. So the best thing to do is to simply copy and paste what kind already makes available on its website. Here is the script we can download:

#!/bin/sh
# Original Source
# https://creativecommons.org/licenses/by/4.0/
# https://kind.sigs.k8s.io/docs/user/local-registry/
set -o errexit

# 1. Create registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --network bridge --name "${reg_name}" \
    registry:2
fi

# 2. Create kind cluster with containerd registry config dir enabled
# TODO: kind will eventually enable this by default and this patch will
# be unnecessary.
#
# See:
# https://github.com/kubernetes-sigs/kind/issues/2875
# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# See: https://github.com/containerd/containerd/blob/main/docs/hosts.md
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
EOF

# 3. Add the registry config to the nodes
#
# This is necessary because localhost resolves to loopback addresses that are
# network-namespace local.
# In other words: localhost in the container is not localhost on the host.
#
# We want a consistent name that works from both ends, so we tell containerd to
# alias localhost:${reg_port} to the registry container when pulling images
REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
for node in $(kind get nodes); do
  docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
  cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://${reg_name}:5000"]
EOF
done

# 4. Connect the registry to the cluster network if not already connected
# This allows kind to bootstrap the network but ensures they're on the same network
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# 5. Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF

This script can also be found in the root folder of the project. To install the local Docker registry, we only need to run this bash script.
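Assuming we saved it as, say, docker-registry.sh (the file name here is just an example), running it and confirming that the registry container is up could look like this:

chmod +x ./docker-registry.sh
./docker-registry.sh

# The kind-registry container should now be running and listening on 127.0.0.1:5001.
docker ps --filter "name=kind-registry"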


IV. How the code has been created

There is a lot that could be said about the code I have provided as the example for this blog post. However, in this case, let's just focus on a few key aspects and work from the listener service to the collector and then to the database.
When we run the services locally, or even use a docker-compose configuration to get the containers going, we usually rely on the DNS names that are automatically assigned, either the container name or the name we configure with hostname. With k8s, there is also a set of rules that makes hostnames available throughout the cluster. Let's have a look at the listener and collector examples:


Listener example

The listener is an application developed in Java using the Spring framework. Like all applications created this way, there is also an application.properties file:

spring.application.name=wlsm-listener-service
server.port=8080
spring.main.web-application-type=reactive
spring.webflux.base-path=/app/v1/listener

wslm.url.collector=http://localhost:8081/api/v1/collector

Of all these properties, the most important one to focus on for the moment is wslm.url.collector. With the default configuration, we can run this service locally without the need for any containerized environment. In the k8s cluster, however, we need to be able to reach the collector, and for that we have a prod profile with the definition file application-prod.properties:

wslm.url.collector=http://wlsm-collector-deployment.wlsm-namespace.svc.cluster.local:8081/api/v1/collector

This property tries to reach the host wlsm-collector-deployment.wlsm-namespace.svc.cluster.local. The hostname follows this pattern:

<Service Name>.<Namespace>.svc.cluster.local

We've got five dot-separated elements. The last three are static, and the first two depend on the service we are trying to reach: on the left we place the service name, followed by the namespace. This is important for understanding how the containers connect to each other within the cluster.
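Once the services from section V are deployed, we can verify this naming scheme from inside the cluster with a throwaway pod. This is just a sketch; the busybox image and the pod name are arbitrary choices:

# Resolve the collector's cluster-internal hostname from a temporary pod.
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup wlsm-collector-deployment.wlsm-namespace.svc.cluster.local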

The part of the code that is interesting to have a look at is of course the controller and the service. The controller looks like this:

@RestController
@RequestMapping
public class ListenerController {
    private final ListenerService listenerService;
    ListenerController(ListenerService listenerService) {
        this.listenerService = listenerService;
    }
    @GetMapping("info")
    public String info() {
        return "Listener Service V1";
    }
    @PostMapping("create")
    public Mono<AnimalLocationDto> sendAnimalLocation(
            @RequestBody AnimalLocationDto animalLocationDto) {
        return listenerService.persist(animalLocationDto);
    }
}

And the service looks like this:

@Service
public class ListenerService {
    @Value("${wslm.url.collector:http://localhost:8080}")
    private String collectorUrl;
    private final WebClient client = WebClient.create(collectorUrl);
    HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
    List<AnimalLocationDto> cache = hazelcastInstance.getList("data");
    public Mono<AnimalLocationDto> persist(AnimalLocationDto animalLocationDto) {
        cache.add(animalLocationDto);
        return client.post()
                .uri(collectorUrl.concat("/animals"))
                .contentType(MediaType.APPLICATION_JSON)
                .bodyValue(animalLocationDto)
                .retrieve()
                .bodyToMono(AnimalLocationDto.class);
    }
}

As you may have already noticed, this first application, like all of the applications implemented with the Spring Framework in this repository, is reactive, and they all use Netty instead of Tomcat.
For the moment, we can ignore the Hazelcast usage in this code; it will be used in later versions of this project.


Collector example

The collector works in exactly the same way as the listener at this point. Its only duty for now is to relay data from the listener to the database, and to do that, the collector only needs to know exactly where the database is. Let's make the same analysis on the application.properties file of this project:

spring.application.name=wlsm-collector-service
server.port=8081
spring.main.web-application-type=reactive
spring.webflux.base-path=/api/v1/collector

spring.r2dbc.url=r2dbc:postgresql://localhost:5432/wlsm
spring.r2dbc.username=admin
spring.r2dbc.password=admin

spring.data.r2dbc.repositories.naming-strategy=org.springframework.data.relational.core.mapping.BasicRelationalPersistentEntityNamingStrategy
spring.data.r2dbc.repositories.naming-strategy.table=org.springframework.data.relational.core.mapping.SnakeCaseNamingStrategy
spring.data.r2dbc.repositories.naming-strategy.column=org.springframework.data.relational.core.mapping.SnakeCaseNamingStrategy

These properties are the minimum required to get the service going. However, this is only enough to run it locally. For this service, we also have a prod profile file, and we can have a look at it in application-prod.properties here:

spring.r2dbc.url=r2dbc:postgresql://wlsm-database-deployment.wlsm-namespace.svc.cluster.local:5432/wlsm

The database connection is in this case referring to the host of the database:

wlsm-database-deployment.wlsm-namespace.svc.cluster.local

This again follows the same pattern we have seen before: on the left, the service name, followed by the namespace, with svc.cluster.local appended at the end.

And for this service we also use a controller and a service. The controller looks like this:

@RestController
@RequestMapping
class CollectorController(
    val collectorService: CollectorService
) {
    @PostMapping("animals")
    suspend fun listenAnimalLocation(@RequestBody animalLocationDto: AnimalLocationDto): AnimalLocationDto = run {
        collectorService.persist(animalLocationDto)
        animalLocationDto
    }
}

And the service looks like this:

@Service
class CollectorService(
    val applicationEventPublisher: ApplicationEventPublisher
) {
    fun persist(animalLocationDto: AnimalLocationDto) =
        applicationEventPublisher.publishEvent(AnimalLocationEvent(animalLocationDto))

}

The service uses an event publisher called applicationEventPublisher, following an event-streaming architecture. The event gets handled later on in this event listener, which, as we can readily see, uses R2DBC to stay within the reactive architecture paradigm:

@Service
class EventHandlerService(
   val animalLocationDao: AnimalLocationDao
) {
    @EventListener
    fun processEvent(animalLocationEvent: AnimalLocationEvent){
        println(animalLocationEvent)
        runBlocking(Dispatchers.IO) {
            animalLocationDao.save(animalLocationEvent.animalLocationDto.toEntity())
        }
    }
}

V. Deploy scripts

Deploying is normally a very straightforward task with k8s. However, it is also important to have a look at the configuration needed for our services. As an example, let's have a look at the listener deployment:

apiVersion: v1
kind: Namespace
metadata:
  name: wlsm-namespace
  labels:
    kuma.io/sidecar-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wlsm-listener
  namespace: wlsm-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wlsm-listener
  template:
    metadata:
      labels:
        app: wlsm-listener
    spec:
      containers:
        - name: wlsm-listener-service
          image: localhost:5001/wlsm-listener-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: wlsm-listener-deployment
  namespace: wlsm-namespace
spec:
  selector:
    app: wlsm-listener
  ports:
    - protocol: TCP
      appProtocol: http
      port: 8080

There are three blocks in this configuration. The first block is the namespace block. The namespace configuration is crucial to allow Kuma to inject the Envoy sidecars it needs in order to apply policies. Without a defined namespace, Kuma will not be able to do this. The other thing we need to pay attention to when configuring Kuma is that the namespace must contain the proper label that Kuma will recognize:

kuma.io/sidecar-injection: enabled

The namespace definition with the correct label is vital to get Kuma working. In the second block we find the definition of the deployment. This is where we define what the deployment of our pod is going to look like in our Kubernetes cluster. The important things to focus on here are the image, the imagePullPolicy and the containerPort. The image is the complete tag of the Docker image we are using. The port configured for the Docker registry we created with kind is 5001, and it is included in the tag of our image. The tag therefore works as a name but also as a connection to our Docker registry; that way we can pull the images and create our containers to run in our Kubernetes environment.

But of course, to be able to use images, we need to create them first, so let's take a look at how that is done in the listener and database examples. The Docker image for the listener is defined like this:

FROM eclipse-temurin:21-jdk-alpine

WORKDIR /root

ENV LANG=C.UTF-8

COPY entrypoint.sh /root

COPY build/libs/wlsm-listener-service.jar /root/wlsm-listener-service.jar

ENTRYPOINT ["/root/entrypoint.sh"]

This all starts from a base image called eclipse-temurin:21-jdk-alpine. We copy the jar created by building the project into the image, and before that we also copy entrypoint.sh into the container and define the ENTRYPOINT to use it. The entrypoint simply calls the jar like this:

#!/usr/bin/env sh
java -jar -Dspring.profiles.active=prod wlsm-listener-service.jar

The database service is quite different because it uses a few scripts that are open source and available online:

FROM postgres:15

COPY . /docker-entrypoint-initdb.d

COPY ./multiple /docker-entrypoint-initdb.d/multiple

ENV POSTGRES_USER=admin
ENV POSTGRES_PASSWORD=admin
ENV POSTGRES_MULTIPLE_DATABASES=wlsm

EXPOSE 5432

This Dockerfile copies the create-multiple-postgresql-databases.sh file and the multiple folder into Docker's init directory. Finally, we simply define the variables used in those scripts to set our database and the username/password combination.
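If we want to smoke-test the database image on its own before wiring it into the cluster, a quick local run could look like this (a sketch; the container name is arbitrary and it assumes nothing else is bound to port 5432):

# Build the image and run it in the background, exposing PostgreSQL locally.
cd wlsm-database-service
docker build . --tag localhost:5001/wlsm-database-service
docker run --rm -d -p 5432:5432 --name wlsm-db-smoke localhost:5001/wlsm-database-service

# The init scripts should report the creation of the wlsm database.
docker logs wlsm-db-smoke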

The database is created using the following schema:

CREATE TABLE families(
    id uuid DEFAULT gen_random_uuid(),
    name VARCHAR(100),
    PRIMARY KEY(id)
);

CREATE TABLE genuses(
    id uuid DEFAULT gen_random_uuid(),
    name VARCHAR(100),
    PRIMARY KEY(id)
);

CREATE TABLE species(
    id uuid DEFAULT gen_random_uuid(),
    common_name VARCHAR(100),
    family uuid,
    genus uuid,
    PRIMARY KEY(id),
    CONSTRAINT fk_species
        FOREIGN KEY(family)
            REFERENCES families(id),
    CONSTRAINT fk_genus
        FOREIGN KEY(genus)
            REFERENCES genuses(id)
);

CREATE TABLE animal (
    id uuid DEFAULT gen_random_uuid(),
    name VARCHAR(100),
    species_id uuid,
    PRIMARY KEY(id),
    CONSTRAINT fk_species
        FOREIGN KEY(species_id)
            REFERENCES species(id)
);

CREATE TABLE animal_location (
    id uuid DEFAULT gen_random_uuid(),
    animal_id uuid,
    latitude BIGINT,
    longitude BIGINT,
    PRIMARY KEY(id),
    CONSTRAINT fk_animal
        FOREIGN KEY(animal_id)
            REFERENCES animal(id)
);

And as a data example, we will register one animal by the name of Piquinho. Piquinho is simply the name of a travelling albatross that is going around the world with a sensor attached to it, and we are reading the data that the sensor sends to us. There are two tables that help define a species: families and genuses, which hold the family and the genus. The species table then defines the species an animal belongs to. Finally, we define the animal in the table of the same name, where the species and the name of the animal get registered. The database looks like this:

Image description

In order to build the project, create the images and start everything, we can run the following commands, which are available in the Makefile:

make
make create-and-push-images
make k8s-apply-deployment

The first make is just a Gradle build command. The second command uses the variable:

MODULE_TAGS := aggregator \
               collector \
               listener \
               management \
               database

to run:

docker images "*/*wlsm*" --format '{{.Repository}}' | xargs -I {}  docker rmi {}
@for tag in $(MODULE_TAGS); do \
    export CURRENT=$(shell pwd); \
    echo "Building Image $$tag..."; \
    cd "wlsm-"$$tag"-service"; \
    docker build . --tag localhost:5001/"wlsm-"$$tag"-service"; \
    docker push localhost:5001/"wlsm-"$$tag"-service"; \
    cd $$CURRENT; \
done

This simply goes through every module and uses a standard, generic command that changes according to the value given in MODULE_TAGS to create the images and push them to the local registry on port 5001. Following the same strategy, we can then use the third command to deploy our pods. This third command uses a different loop that looks like this:

@for tag in $(MODULE_TAGS); do \
    export CURRENT=$(shell pwd); \
    echo "Applying File $$tag..."; \
    cd "wlsm-"$$tag"-service"; \
    kubectl apply -f $$tag-deployment.yaml --force; \
    cd $$CURRENT; \
done

In this case, it applies the deployment script of every single one of the services. If we run the command kubectl get pods --all-namespaces, we should get this output:

NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-76f75df574-dmt5m                     1/1     Running   0          5m21s
kube-system          coredns-76f75df574-jtrfr                     1/1     Running   0          5m21s
kube-system          etcd-kind-control-plane                      1/1     Running   0          5m38s
kube-system          kindnet-7frts                                1/1     Running   0          5m21s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          5m36s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          5m36s
kube-system          kube-proxy-njzvl                             1/1     Running   0          5m21s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          5m36s
kuma-system          kuma-control-plane-5f47fdb4c6-7sqmp          1/1     Running   0          17s
local-path-storage   local-path-provisioner-7577fdbbfb-5qnxr      1/1     Running   0          5m21s
wlsm-namespace       wlsm-aggregator-64fc4599b-hg9qw              1/1     Running   0          4m23s
wlsm-namespace       wlsm-collector-5d44b54dbc-swf84              1/1     Running   0          4m23s
wlsm-namespace       wlsm-database-666d794c87-pslzp               1/1     Running   0          4m22s
wlsm-namespace       wlsm-listener-7bfbcf799-f44f5                1/1     Running   0          4m23s
wlsm-namespace       wlsm-management-748cf7b48f-8cjh9             1/1     Running   0          4m23s

What we should observe here at this point is the presence of the kuma-control-plane, the kube-controller-manager, and all the services running in our own custom wlsm-namespace. Our cluster is isolated from the outside, so in order to access the different ports, we need to create a port-forward for every pod we want to reach.

We can also have a look at this with k9s:

Image description

To create the port-forwards, we can issue these commands in separate tabs:
kubectl port-forward svc/wlsm-collector-deployment -n wlsm-namespace 8081:8081
kubectl port-forward svc/wlsm-listener-deployment -n wlsm-namespace 8080:8080
kubectl port-forward svc/wlsm-database-deployment -n wlsm-namespace 5432:5432
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681

VI. Running the application

In order to run the application, we should open all the ports, and when all of them are open we should see something like this on our screens:

Image description

We can connect to the database using localhost and port 5432. The connection string is this one: jdbc:postgresql://localhost:5432/wlsm. And to access it we then use the username/password combination of admin/admin.

The first thing we need to do before we perform any test is to find out the id of Piquinho, and we can do that by using IntelliJ's database tools like this:

Image description
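If you prefer the command line, the same lookup can be done with psql, assuming the PostgreSQL client is installed locally and the database port-forward is running:

# The animal's name may be stored with different casing, hence ILIKE.
psql "postgresql://admin:admin@localhost:5432/wlsm" \
  -c "SELECT id, name FROM animal WHERE name ILIKE 'piquinho';"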

In the root folder of the project, there is a file called test-requests.http. This is a scratch file to create REST requests against our open ports:

###
GET http://localhost:8080/app/v1/listener/info

###
POST http://localhost:8080/app/v1/listener/create
Content-Type: application/json

{
  "animalId": "2ffc17b7-1956-4105-845f-b10a766789da",
  "latitude": 52505252,
  "longitude": 2869152
}

###
POST http://localhost:8081/api/v1/collector/animals
Content-Type: application/json

{
  "animalId": "2ffc17b7-1956-4105-845f-b10a766789da",
  "latitude": 52505252,
  "longitude": 2869152
}

In order to be able to use this file, we only need to replace the id in this example from 2ffc17b7-1956-4105-845f-b10a766789da to d5ad0824-71c0-4786-a04a-ac2b9a032da4. In this case, we can make requests to the collector or to the listener. Both requests should work, and afterwards we should see this kind of response for each request:

{
  "animalId": "d5ad0824-71c0-4786-a04a-ac2b9a032da4",
  "latitude": 52505252,
  "longitude": 2869152
}
Response file saved.
> 2024-04-12T001024.200.json

Response code: 200 (OK); Time: 7460ms (7 s 460 ms); Content length: 91 bytes (91 B)

Because both ports are open and, at this point, they share the same payload type, we can perform the same requests against the listener and the collector. After making those two requests, we should find results in the animal_location table:

Image description

This only confirms that the cluster is running correctly, and now we are ready to test policies with our Kuma mesh.

VII. MeshTrafficPermission - Part I

The MeshTrafficPermission is one of the features we can choose in Kuma, and it is probably the most used one.

But first, let's take a moment to explore the Kuma control plane. With all the port-forwarding on, we can just go to localhost:5681/gui and visualize our Kuma meshes. On the main page we should see something like this:

Image description

There is nothing much to see at the moment, but let's now apply the MeshTrafficPermission:

echo "apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: mtp
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: Mesh
      default:
        action: Allow" | kubectl apply -f -

Once we apply this, we should get a response like this: meshtrafficpermission.kuma.io/mtp created.
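We can also double-check that the policy has been stored by asking the cluster for it (the GUI shows the same information under the policies view):

kubectl get meshtrafficpermissions -n kuma-system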

VIII. Mesh

Applying the mesh on its own doesn't change much when it comes to the setup of our cluster. What it does do is allow us to set up traffic routing policies. There are many things we can choose from, but one of the most obvious is mTLS, otherwise referred to as mutual TLS, which in very short terms means that certificates are mutually accepted and validated in order to establish identity between parties and encrypt data traffic. This can be done automatically for us using this simple Mesh configuration:

echo "apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin" | kubectl apply -f -

After applying this policy we may come across a warning like this one:

Warning: resource meshes/default is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

For now, we can ignore this warning.
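To confirm that mTLS is now enabled on the default mesh, we can simply read the Mesh resource back and look for the mtls section:

kubectl get mesh default -o yaml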

IX. MeshTrafficPermission - Part II

Now comes the fun part, and the first thing we are going to do is disable all traffic between all pods:

echo "
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: wlsm-namespace
  name: mtp
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: Mesh
      default:
        action: Deny" | kubectl apply -f -

And after we get the confirmation message meshtrafficpermission.kuma.io/mtp configured, if we try to make any request through any of the port-forwards, we'll get:

HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 133

{
  "timestamp": "2024-04-12T07:09:26.718+00:00",
  "path": "/create",
  "status": 500,
  "error": "Internal Server Error",
  "requestId": "720749ce-56"
}
Response file saved.
> 2024-04-12T090926.500.json

Response code: 500 (Internal Server Error); Time: 10ms (10 ms); Content length: 133 bytes (133 B)

This means that all traffic between pods is being denied. What we now have is an internal system protected against possible bad actors within our organization, but we have also blocked traffic between all pods. So mTLS is a great thing, but blocking all traffic is not. The way to make this perfect is simply to create exceptions to that deny-all rule, and for that we need policies that allow traffic between the listener and the collector, and between the collector and the database. Let's start with the traffic between the collector and the database:

echo "
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: wlsm-database
spec:       
  targetRef:
    kind: MeshService
    name: wlsm-database-deployment_wlsm-namespace_svc_5432
  from:
    - targetRef:
        kind: MeshService
        name: wlsm-collector-deployment_wlsm-namespace_svc_8081
      default:
        action: Allow" | kubectl apply -f -

In this case, what we are doing is allowing data traffic to flow from the collector targetRef to the database targetRef. If you didn't know this already, it is important to note how Kuma interprets these names, which, just like the hostnames we saw earlier, are used functionally. The generic way to build them is like this:

<service name>_<namespace>_svc_<service port>

Here, the separator is an underscore, and building the name this way lets Kuma know exactly what is permitted. If we apply this policy, we get the response meshtrafficpermission.kuma.io/wlsm-database created, and we'll then be able to send requests to the collector. When making them, the response should now be a 200, confirming that the location record has been sent to the collector:

POST http://localhost:8081/api/v1/collector/animals

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 91

{
  "animalId": "a3a1bc1c-f284-4876-a84f-f75184b6998f",
  "latitude": 52505252,
  "longitude": 2869152
}
Response file saved.
> 2024-04-12T091754.200.json

Response code: 200 (OK); Time: 1732ms (1 s 732 ms); Content length: 91 bytes (91 B)
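As an aside, if you are ever unsure about the exact MeshService names to use in these policies, kumactl can list the services as the mesh sees them. This is a sketch that assumes kumactl has been pointed at the control plane through the port-forward on 5681:

# One-time setup: register the port-forwarded control plane with kumactl.
kumactl config control-planes add --name=wlsm --address=http://localhost:5681

# List the service names known to the mesh; these are the names used in the targetRefs.
kumactl inspect services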

However, we still haven't defined an exception for the traffic between the listener and the collector, so making a request that way will result in this:

HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 133

{
  "timestamp": "2024-04-12T07:18:54.149+00:00",
  "path": "/create",
  "status": 500,
  "error": "Internal Server Error",
  "requestId": "e8973d33-62"
}
Response file saved.
> 2024-04-12T091854-1.500.json

Response code: 500 (Internal Server Error); Time: 10ms (10 ms); Content length: 133 bytes (133 B)

And this is of course expected. Let's now apply another policy for this data traffic:

echo "
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: wlsm-collector
spec:       
  targetRef:
    kind: MeshService
    name: wlsm-collector-deployment_wlsm-namespace_svc_8081
  from:
    - targetRef:
        kind: MeshService
        name: wlsm-listener-deployment_wlsm-namespace_svc_8080
      default:
        action: Allow" | kubectl apply -f -

This makes it possible to now perform requests from the listener to the collector:

POST http://localhost:8080/app/v1/listener/create

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 91

{
  "animalId": "a3a1bc1c-f284-4876-a84f-f75184b6998f",
  "latitude": 52505252,
  "longitude": 2869152
}
Response file saved.
> 2024-04-12T092039-2.200.json

Response code: 200 (OK); Time: 14ms (14 ms); Content length: 91 bytes (91 B)

X. MeshFaultInjection

Finally, and just to showcase one more feature as an example, we can also use MeshFaultInjection, which can be very useful when performing tests with Kuma. We can simulate potential problems within our mesh and check, for example, whether error handling is being done correctly. We can also check other things, like how any circuit breakers we may have configured react to faulty connections or high-rate requests.

So let's try it. One way to apply MeshFaultInjection is like this:

echo "
apiVersion: kuma.io/v1alpha1
kind: MeshFaultInjection
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshService
    name: wlsm-collector-deployment_wlsm-namespace_svc_8081
  from:
    - targetRef:
        kind: MeshService
        name: wlsm-listener-deployment_wlsm-namespace_svc_8080
      default:
        http:
          - abort:
              httpStatus: 500
              percentage: 50"  | kubectl apply -f -

With this policy, we are saying that the traffic outbound from the listener and inbound to the collector will have only a 50% chance of success. The request results are unpredictable, so after applying this policy we may see a mix of errors and successful requests to the listener endpoint.
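One easy way to see this in action is to fire a handful of requests in a row and watch the status codes alternate (a small curl sketch; the payload and the id are the same ones used in test-requests.http):

for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST http://localhost:8080/app/v1/listener/create \
    -H "Content-Type: application/json" \
    -d '{"animalId":"d5ad0824-71c0-4786-a04a-ac2b9a032da4","latitude":52505252,"longitude":2869152}'
done

Using the test-requests.http file, the same behaviour shows up as a mix of responses like these: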

POST http://localhost:8080/app/v1/listener/create

HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 133

{
  "timestamp": "2024-04-12T07:28:00.008+00:00",
  "path": "/create",
  "status": 500,
  "error": "Internal Server Error",
  "requestId": "2206f29e-78"
}
Response file saved.
> 2024-04-12T092800.500.json

Response code: 500 (Internal Server Error); Time: 8ms (8 ms); Content length: 133 bytes (133 B)

POST http://localhost:8080/app/v1/listener/create

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 91

{
  "animalId": "a3a1bc1c-f284-4876-a84f-f75184b6998f",
  "latitude": 52505252,
  "longitude": 2869152
}
Response file saved.
> 2024-04-12T092819.200.json

Response code: 200 (OK); Time: 13ms (13 ms); Content length: 91 bytes (91 B)

Finally, just out of interest, we can have a look at what our animal_location table looks like now:

Image description

XI. Conclusion

I hope you were able to follow this article so far and that you now have a cluster running on your machine. Thank you for reading this article and for giving up a bit of your time to understand and learn a bit more about Kuma. I personally see great usage for it and a great future for Kuma, as it makes it possible to configure and take much more granular control of our network and our environment. Its enterprise version, Kong Mesh, seems quite complete. Kuma is open source, and it seems great for testing as well as for the enterprise. I find the subject of meshes very interesting, and I think Kuma provides a great way to learn how meshes work and to get a feel for how we can better control the data flow within our network.

If we want to see the status of our services, we can just go to our Kuma control plane at this localhost address: http://localhost:5681/gui/meshes/default/services?page=1&size=50:

Image description

In the Kuma control plane, we can also have a look at the policies installed, check out the status of our pods, monitor what is going on in the background, and in general just get an overview of what is happening in our mesh and how it is configured. I invite you to just go through the application and see if you can check the statuses of the policies we have installed. The Kuma control plane, a.k.a. the GUI, is made precisely to be easy to understand and to follow up on our mesh.

XII. Resources

I have also made a video about it on my JESPROTECH YouTube channel right over here:
