Java Microservice Deployment to Kubernetes
Bruno
Posted on March 18, 2021
The early attraction of Java was its promise of “write-once, run anywhere.” In theory, this portability should allow a developer to write code that will run unmodified on any platform.
However, we now see the ecosystem moving towards cloud-native technologies such as containers, and teams are building applications that are more modular and distributed to make them easier to scale. This changes the approach to developing, deploying, and exposing applications quite a bit.
As a result, developers are now learning how to deploy their Java applications in containers, with Kubernetes as the container management platform, in pursuit of benefits such as increased resilience and scalability. Still, as they begin their microservices journey, they realize it is not as easy as expected.
When deploying a Java application using a microservices architecture on Kubernetes, they learn that it is no longer only about their application code. To deploy their applications and give users access to them, they now need to understand concepts such as replication controllers, pods, services, load balancing, and more, which were previously seen as infrastructure concerns handled by system administrators. This adds complexity and delays to the application delivery process and negatively impacts the developer experience.
In the end, developers should focus on what matters most, their application code, while infrastructure-related tasks are automated.
Let's see how we can deploy a sample Java microservice application based on WildFly.
If you were to deploy this application on Kubernetes you would need to:
- Create a replication controller config
- Create a service for your application
- Create and configure load balancing so users can access your application
- Create a namespace
- And more
Creating the above can be time-consuming, and managing these configurations as your application changes introduces further complexity, such as deciding between a NodePort and a ClusterIP service.
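To make the checklist above concrete, here is a minimal sketch of the manual approach using kubectl. The namespace name, deployment name, and service type are assumptions chosen for illustration; a real setup would also need load balancing rules, health checks, and resource limits.

```shell
# Create a dedicated namespace for the application (name is illustrative)
kubectl create namespace wildfly-demo

# Create a Deployment running the WildFly image (Deployments are the
# modern replacement for replication controller configs)
kubectl create deployment wildfly --image=docker.io/jboss/wildfly:latest \
  --namespace wildfly-demo

# Expose the Deployment through a Service; choosing the Service type
# (ClusterIP, NodePort, LoadBalancer) is one of the decisions you now own
kubectl expose deployment wildfly --port=8080 --type=LoadBalancer \
  --namespace wildfly-demo
```

Even this simplified version skips the YAML manifests you would normally write and version alongside your code.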
So let's look at an alternative approach, using Ketch.
Requirements:
- A Kubernetes cluster
- Ketch installed and available through your CLI. For more info on installing Ketch, please visit Getting Started with Ketch
With Ketch already installed, the first step is to create a pool, which translates to a namespace in Kubernetes, where you will deploy your WildFly application. You can do it using the command below:
ketch pool add development --ingress-service-endpoint 35.197.96.152 --ingress-type istio
- Keep in mind that you will need to update your ingress service endpoint IP with the one from your cluster, which you can find by running the command below:
kubectl get services -n istio-system
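If your cluster uses a standard Istio installation, the ingress gateway Service is typically named `istio-ingressgateway`; assuming that name, you can extract just the external IP with a jsonpath query instead of scanning the full service list:

```shell
# Print only the external IP of the Istio ingress gateway
# (assumes the default service name from a standard Istio install)
kubectl get service istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```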
You can see your pool was successfully created by running the command below:
ketch pool list
Now, you create your application where the WildFly application image will be deployed next:
ketch app create wildfly --pool development
The last step is for you to deploy the application image:
ketch app deploy wildfly -i docker.io/jboss/wildfly:latest
You can see detailed information of your application status and the URL that was automatically assigned to it using the command below:
ketch app list
If you navigate to the URL presented, you will see the web UI for the application.
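You can also verify the deployment from the terminal with curl; the hostname below is a placeholder for whatever URL `ketch app list` reported for your application:

```shell
# Replace <your-app-url> with the URL shown by `ketch app list`;
# -I fetches only the response headers
curl -I http://<your-app-url>
```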
Using Ketch, instead of spending a large amount of time learning different Kubernetes concepts, you deployed your Java microservices application with three simple commands.
Ketch eliminates application deployment complexities, improving the developer experience and speeding up application delivery.
Try Ketch today, and join one of the fastest-growing open-source projects in the cloud-native space!