Interactive Java JVM Monitoring on Kubernetes: Exposing JMXMP
Jean-François Lamy
Posted on May 21, 2020
In addition to log-based or agent-based monitoring, it is occasionally necessary to interactively inspect the behavior of a Java application, in particular its memory or CPU usage. In the Java ecosystem, the free VisualVM or jconsole tools are often used for this. These tools also show thread usage and let you drill down and filter interactively.
This post explains the steps necessary to manually inspect a Java application using the JMXMP protocol in a Kubernetes cluster.
- Part 1 is about making the application to be monitored visible using JMXMP.
- Part 2 discusses how to add VisualVM to the mix (similar techniques apply to other monitoring tools).
What about RMI?
Exposing the default RMI protocol is extremely painful because of the way it handles ports and because it requires a back channel. The general consensus is: don't bother. After wasting a couple of days, I wholeheartedly agree.
Enabling JMXMP monitoring
The simplest way to enable monitoring is to open the monitoring port ourselves, from the application code. For example, in a web frontend, add the following to open port 1098 for JMXMP monitoring:
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

try {
    // Get the MBean server used for monitoring/controlling the JVM
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    // Create and start a JMXMP connector server on port 1098
    JMXServiceURL url = new JMXServiceURL("jmxmp", "localhost", 1098);
    JMXConnectorServer cs = JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
    cs.start();
} catch (Exception e) {
    e.printStackTrace();
}
In order for this code to compile, you will need to include a JMXMP implementation. If you are using Maven, add the following to your pom.xml:
<!-- https://mvnrepository.com/artifact/org.glassfish.external/opendmk_jmxremote_optional_jar -->
<dependency>
    <groupId>org.glassfish.external</groupId>
    <artifactId>opendmk_jmxremote_optional_jar</artifactId>
    <version>1.0-b01-ea</version>
</dependency>
In real life:
- you will likely read the port number from an environment variable, so that it can be configured through a ConfigMap or Secret, and perhaps skip starting the connector when the variable is absent (see the sketch after this list).
- you may wish to secure the connection. For my own purposes, I have used IP whitelisting of my development workstation in a Kubernetes network policy, so I have not explored this topic further. See https://interlok.adaptris.net/interlok-docs/advanced-jmx.html for examples of how to enable this.
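As a minimal sketch of the first point (the JMX_PORT variable name is my own choice, not a standard), the connector startup shown earlier could be guarded like this:
// Hypothetical: only start the JMXMP connector when JMX_PORT is set,
// e.g. injected into the pod from a ConfigMap or Secret.
String jmxPort = System.getenv("JMX_PORT");
if (jmxPort != null) {
    try {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL("jmxmp", "localhost", Integer.parseInt(jmxPort));
        JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs).start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}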
Special considerations for building the container
KubeSail uses IPv6 by default. While debugging this recipe, we found it necessary to add the following Java flags to the Java entry point command of the Docker container:
-Djava.net.preferIPv4Stack=true
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
(Editor's note: we will be checking whether these flags are still necessary.)
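For illustration only, here is roughly where those flags end up in the container build; this is a sketch, and the base image and app.jar path are assumptions rather than the image actually used in this recipe.
FROM openjdk:11-jre-slim
# Hypothetical application jar; adjust to your own build output
COPY target/myapp.jar /app/app.jar
# Java entry point command with the flags added
ENTRYPOINT ["java", \
  "-Djava.net.preferIPv4Stack=true", \
  "-Dcom.sun.management.jmxremote.ssl=false", \
  "-Dcom.sun.management.jmxremote.authenticate=false", \
  "-jar", "/app/app.jar"]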
Exposing a port in Kubernetes for Docker Desktop
The following instructions have been tested with the new WSL2 Docker backend, which runs directly on Linux.
For Docker Desktop, the simplest way to expose a port is to use a LoadBalancer service. This avoids having to run kubectl (typically a port-forward) in every session.
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 8080
    - protocol: TCP
      name: jmx
      port: 1098
      targetPort: 1098
  type: LoadBalancer
This makes it possible to open the web application on http://localhost even though the container is listening on port 8080, and it also opens up the JMX monitoring channel on port 1098. You can change the port to suit your needs, but targetPort needs to match what is used by the application being monitored (see the Java code above).
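If you only want an occasional look instead of a permanent Service, the per-session kubectl approach mentioned above is a plain port-forward (shown here against the myapp service; the port stays open only while the command runs):
kubectl port-forward service/myapp 1098:1098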
Exposing a port with KubeSail Kubernetes
KubeSail is an affordable managed PaaS for running Kubernetes applications. By default, only traffic coming in through the NGINX Ingress is allowed to go further. It is therefore necessary to:
- create a network policy that allows traffic to flow to the monitored application container
- expose the port for monitoring
Creating a network policy
The following opens port 1098 on the application so it can be reached from outside, and limits outside access to a single address (obviously you would use your own):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: myapp-allow-jmx
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 208.77.188.166/32
      ports:
        - protocol: TCP
          port: 1098
More advanced topics are discussed at https://www.magalix.com/blog/kubernetes-network-policies-101
Exposing the monitoring port
The simplest way is to use a NodePort, which makes the node (virtual machine) open a port and expose it to the outside world. On a shared service like KubeSail, the chosen port may therefore conflict with someone else's, and you may need to change it.
The last line (externalTrafficPolicy: Local) is required to prevent Kubernetes from doing NAT (Network Address Translation); this preserves the client's source IP, which the network policy above relies on for whitelisting.
kind: Service
apiVersion: v1
metadata:
  name: jmx
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 1098
      targetPort: 1098
      nodePort: 30098
  type: NodePort
  externalTrafficPolicy: Local
Getting the address of the node
- To get the address of the node, you may use
kubectl get pod -o wide
- You may also install the jq JSON query tool, and use the following one-liner
kubectl get pod -o json -l 'app=myapp' | jq -r '.items[0].spec.nodeName'
which will output the node name (the IP address 54.151.3.?? in this example)
- To test that all is good, you can use the following command
telnet 54.151.3.?? 300??
You will get garbage back, but with a recognizable javax.management.remote.message.HandshakeBeginMessage string at the beginning.
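If you want a cleaner check than raw telnet output, the following small client sketch (my own, not part of the original recipe) connects over JMXMP and prints the MBean count. It needs the same opendmk_jmxremote_optional_jar dependency on its classpath.
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxmpCheck {
    public static void main(String[] args) throws Exception {
        // Pass the node address and NodePort found above, e.g. 54.151.3.?? and 300??
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        JMXServiceURL url = new JMXServiceURL("jmxmp", host, port);
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("Connected; MBean count = " + connection.getMBeanCount());
        }
    }
}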