GeneXus application running on Kubernetes
Sebastián Gómez
Posted on March 2, 2020
Some time ago I started playing with Docker. I fell in love with it, I've written some posts about it, and I think every developer should take a look at Docker for both production and development environments.
After learning about Docker containers, the next thing you hear about is Kubernetes. At first I didn't know why I would need it, but then I understood what's so great about it.
I believe its portability is its killer feature. If you're not familiar with Kubernetes, this is not the best post to begin with; there are tons of tutorials and great docs online.
This is my story on how I managed to deploy a GeneXus web app with Kubernetes.
This is what I wanted to achieve:
My application needs access to a relational database. It works pretty well with MySQL, so I'm planning to use a MySQL-as-a-Service offering (in this case IBM's Compose for MySQL).
It does have some heavy processing, so in some cases I might need to scale out (horizontally) to 3 nodes, or hopefully more. But that brings some challenges: I don't want to rely on the file system of the nodes, so I set up a Storage Account (this time with Azure Storage) and I want to use that as the file system of my application.
The other problem I might hit has to do with web sessions. My application makes heavy use of them, and in a distributed environment I don't want to rely on server affinity.
So I thought about using Redis for caching and session management. Setting up Redis for caching in GeneXus is easy: just turn on a property and that's it. Session management is a little more complicated, but it's not hard; you'll see it in a minute.
Also, since I'm using Docker containers, I don't need to know how to install Redis; I'll just pull an image from Docker Hub and that's it (in this case I'm using redis:5.0.7-alpine).
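For local testing, a single command is enough to get that image running. This is just a sketch; the container name and the published port are my own choices:
docker run -d --name redis -p 6379:6379 redis:5.0.7-alpine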
Step 0
Compile and run my application locally. Is everything fine? Cool, keep going!
Step 1
Deploy the application to Docker. But here's where we need to change a few things outside of GeneXus.
For the (in this case Java) application to use Redis as a cache and session manager, we need to modify a few things in the base image we will use, so this is the Dockerfile I created to build my image.
# Dockerfile generated by GeneXus (Java)
FROM tomcat:9-jdk11
LABEL maintainer="seba <seba@example.com>"
WORKDIR /usr/local/tomcat/webapps/
RUN [ -d ROOT/ ] && mv ROOT/ ROOT.old/ || true
ADD ["ROOT.war", "/usr/local/tomcat/webapps/"]
ADD redis/*.jar /usr/local/tomcat/lib/
ADD redis/*.xml /usr/local/tomcat/conf/
ADD redis/redis-data-cache.properties /usr/local/tomcat/conf
The Dockerfile is a standard (GeneXus generated) Dockerfile until the first blank line. The last three ADD commands are the ones I added myself. The files they copy into the image are used by the Tomcat Clustering Redis Session Manager I used.
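In case you're curious what those extra files do: the jars are the session manager library and its dependencies, the XML is Tomcat's context.xml with the session valve and manager registered, and the properties file tells the manager where Redis lives. I'm assuming the commonly used tomcat-cluster-redis-session-manager library here, so treat the class names below as illustrative and check the library's own README for the exact values:
<!-- illustrative snippet for conf/context.xml: register Redis-backed session handling -->
<Valve className="tomcat.request.session.redis.SessionHandlerValve" />
<Manager className="tomcat.request.session.redis.SessionManager" />
The Redis host itself goes in redis-data-cache.properties; we'll point it at the Redis service defined in Step 2.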
After modifying the dockerfile I ran the following commands:
docker build -t k8stestjavaenvironment .
to build the image
docker tag k8stestjavaenvironment sebagomez/genexus-wwhero
to tag the image to something that I can push to a registry, and...
docker push sebagomez/genexus-wwhero
to push it
Now there's a Docker image in Docker's public registry with my application (and everything it needs to run).
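Before jumping to Kubernetes, the image can be smoke-tested locally. This is only a sketch: the environment variable names are the same ones you'll see in the Kubernetes yaml below, and Redis and MySQL need to be reachable from the container for the app to fully work.
docker run --rm -p 8080:8080 -e GX_COM_K8STEST_DEFAULT_USER_ID=<MySQL User> -e GX_COM_K8STEST_DEFAULT_USER_PASSWORD=<MySQL Password> -e "GX_COM_K8STEST_DEFAULT_DB_URL=jdbc:mysql://<MySQL Service>/K8sTest?useSSL=false" sebagomez/genexus-wwhero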
Step 2
Create the Kubernetes yaml file. I'm not trying to teach Kubernetes here, but this is the file, and I'll tell you what the different sections are.
apiVersion: v1
kind: Service
metadata:
  name: gx-java-app
  labels:
    app: gx-java-app
spec:
  ports:
  - port: 8080
  selector:
    app: gx-java-app
    tier: frontend
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gx-java-app
  labels:
    app: gx-java-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gx-java-app
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: gx-java-app
        tier: frontend
    spec:
      containers:
      - image: sebagomez/genexus-wwhero
        name: k8stest-genexus-java-app
        env:
        - name: GX_COM_K8STEST_DEFAULT_USER_ID
          value: <MySQL User>
        - name: GX_COM_K8STEST_DEFAULT_USER_PASSWORD
          value: <MySQL Password>
        - name: GX_COM_K8STEST_DEFAULT_DB_URL
          value: jdbc:mysql://<MySQL Service>/K8sTest?useSSL=false
        ports:
        - containerPort: 8080
          name: gx-java-app
---
apiVersion: v1
kind: Service
metadata:
  name: gx-redis
  labels:
    app: genexus-java-app
spec:
  ports:
  - port: 6379
  selector:
    app: genexus-java-app
    tier: redis
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gx-redis
  labels:
    app: genexus-java-app
spec:
  selector:
    matchLabels:
      app: genexus-java-app
      tier: redis
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: genexus-java-app
        tier: redis
    spec:
      containers:
      - image: redis:5.0.7-alpine
        name: redis
        ports:
        - containerPort: 6379
          name: redis
The first section is the Service for the web app. When you deploy an app to Kubernetes it does not get exposed by default; you need to create a Kubernetes Service, and that's what the first section does.
The second section is the Deployment of the web application itself. Notice I'm using the image I tagged before, and I'm using environment variables inside the cluster for configuration.
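The Deployment starts with replicas: 2. When the heavy processing I mentioned at the beginning kicks in, scaling out is a one-liner (assuming the default namespace):
kubectl scale deployment gx-java-app --replicas=3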
Then I'm setting up another Service, this time for the Redis deployment. This service is not exposed outside of the cluster; it'll only be used by my application.
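Inside the cluster, the app reaches Redis through that service's DNS name. So, assuming the property key used by the session manager library from Step 1 (the key name is illustrative; check the library's docs), the host entry in redis-data-cache.properties would look something like this:
# points the Tomcat session manager at the gx-redis Service on its default port
redis.hosts=gx-redis:6379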
And lastly, the Redis Deployment itself.
It sounds like a lot, but it is quite easy, and easy to automate. Soon GeneXus will be able to generate that yaml based on your needs.
That allowed me to run everything on my very own cluster on my machine:
kubectl apply -f .\myapp.yaml
and the whole thing starts.
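Once the yaml is applied, a couple of kubectl commands confirm everything is up. This assumes a local cluster such as Docker Desktop or minikube; the exact URL to browse the app depends on how your cluster exposes NodePort services.
# the two gx-java-app pods and the redis pod should show up as Running
kubectl get pods
# the NodePort mapped to port 8080 appears in the PORT(S) column
kubectl get service gx-java-app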
But here's the great thing, and what's great about Kubernetes (IMHO): I can take that exact yaml file to IBM's cloud and start a new cluster with my app. I'm doing the same with Azure and AWS, so I've built a Cloud Native application that is cloud-provider agnostic, and I can take it wherever I want.
Isn't that cool?!
Let me know your thoughts
Happy deploying!