Creating Tomcat Threadpools for better throughput


Sumateja K

Posted on June 25, 2024


We faced an issue in a customer-facing Java Tomcat application in production. The application receives REST traffic both from an Admin UI and from external customers calling the same endpoints.

The Problem

There were two kinds of requests, say GET calls and POST calls. The problem was that non-critical GET calls were taking a long time, blocking the server's worker threads and causing timeouts across the application. We wanted a way to separate transactions by URL and request method, and to isolate their execution so that the latency of slow transactions does not impact the critical ones.


Solution

We decided to identify and separate the critical transactions in nginx first. We then created two separate Executors in Tomcat, each exposed through its own Connector. This let us route critical traffic to one Executor and non-critical traffic to the other. It also allows a different acceptorThreadCount per Connector, as well as control over each Executor's threads through separate minSpareThreads and maxThreads values. These are configuration changes only and require no changes in code.

Let's walk through the implementation with a small sample application.
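To make the scenario concrete, here is a hypothetical stand-in for such a sample application, built only on the JDK's built-in `com.sun.net.httpserver` (the paths and handlers are illustrative, not the real service): one slow, non-critical GET endpoint and one fast, critical POST endpoint.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class FrontApplication {
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // Non-critical: simulates the slow GET that was tying up server threads.
        server.createContext("/front-application/api/v1/myresource/report", exchange -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            reply(exchange, "slow report");
        });

        // Critical: the latency-sensitive POST path that must not be blocked.
        server.createContext("/front-application/api/v1/myresource/critical_path1",
                exchange -> reply(exchange, "accepted"));

        server.start();
        return server;
    }

    private static void reply(HttpExchange ex, String body) throws IOException {
        byte[] bytes = body.getBytes();
        ex.sendResponseHeaders(200, bytes.length);
        try (OutputStream os = ex.getResponseBody()) { os.write(bytes); }
    }
}
```

In the real setup the application is a WAR deployed to Tomcat; this sketch only exists to show the two classes of traffic we want to keep apart.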

nginx.conf change



events {}

http {

    upstream front_upstream_critical {
        server tomcat:8080;
    }

    upstream front_upstream_non_critical {
        server tomcat:8081;
    }

    map $request_method $upstream {
        default front_upstream_non_critical;
        POST    front_upstream_critical;
    }

    server {
        listen 9090;

        location ~ ^/front-application/api/v1/myresource/(critical_path1|critical_path2|critical_path3)$ {
            proxy_pass_request_body on;
            proxy_pass_request_headers on;
            proxy_set_header Host $host:8080;
            proxy_pass http://$upstream$uri$is_args$args;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location ~* /.* {
            proxy_pass_request_body on;
            proxy_pass_request_headers on;
            proxy_set_header Host $host:8081;
            proxy_pass http://front_upstream_non_critical;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}


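The combined effect of the `map` and the two `location` blocks is that a request reaches the critical upstream only when it is a POST to one of the critical paths; everything else goes to the non-critical upstream. That decision can be sketched as a plain Java predicate (this is a re-statement of the config for clarity, not code that runs anywhere in the setup):

```java
import java.util.regex.Pattern;

public class UpstreamRouter {
    private static final Pattern CRITICAL = Pattern.compile(
        "^/front-application/api/v1/myresource/(critical_path1|critical_path2|critical_path3)$");

    // Mirrors nginx: the critical location matches the path, and the map
    // sends only POST to the critical upstream; everything else falls through.
    public static String route(String method, String path) {
        if ("POST".equals(method) && CRITICAL.matcher(path).matches()) {
            return "front_upstream_critical";     // tomcat:8080
        }
        return "front_upstream_non_critical";     // tomcat:8081
    }
}
```

Note in particular that a GET to a critical path still lands on the non-critical upstream, because the `map` keys only on `$request_method`.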

Once we have split the two kinds of URLs, we make changes in Tomcat's server.xml to add the Executors and Connectors. Notice that we have added port 8081 to the application for the new connector.

Tomcat server change



<Server port="8005" shutdown="SHUTDOWN">
    <Service name="Catalina">
        <Executor name="criticalExecGroup1" namePrefix="criticalExecGroup-" maxThreads="50"
                  minSpareThreads="10"/>
        <Executor name="nonCriticalExecGroup2" namePrefix="nonCriticalExecGroup-" maxThreads="50"
                  minSpareThreads="10"/>


        <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
                   redirectPort="8443" executor="criticalExecGroup1">
        </Connector>

        <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000"
                   redirectPort="8443" executor="nonCriticalExecGroup2">
        </Connector>


        <Engine name="Catalina" defaultHost="localhost">
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />
        </Engine>
    </Service>
</Server>


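A rough JDK equivalent of what a Tomcat `<Executor>` gives you is a bounded pool whose worker threads carry the configured `namePrefix`; that prefix is exactly what shows up in the logs later. This sketch maps `minSpareThreads` to the core size and `maxThreads` to the maximum (an approximation: Tomcat's executor grows the pool before queueing, whereas a plain `ThreadPoolExecutor` with an unbounded queue only grows past core when the queue is bounded and full):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedPool {
    public static ExecutorService create(String namePrefix, int minSpare, int maxThreads) {
        AtomicInteger counter = new AtomicInteger(1);
        // Each worker thread is named namePrefix + number, e.g. "criticalExecGroup-3".
        ThreadFactory factory = r -> {
            Thread t = new Thread(r, namePrefix + counter.getAndIncrement());
            t.setDaemon(true);
            return t;
        };
        return new ThreadPoolExecutor(minSpare, maxThreads, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(), factory);
    }
}
```

Having two such pools, one per connector, is what the two `<Executor>` elements above set up declaratively.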

A sample Docker Compose file wiring the above changes together



version: '3.8'

services:
    tomcat:
        image: tomcat:9.0.63
        ports:
            - "8080:8080"
            - "8081:8081"
        volumes:
            - ./webapps:/usr/local/tomcat/webapps
            - ./conf/server.xml:/usr/local/tomcat/conf/server.xml
        networks:
            - app-network
    nginx:
        image: nginx:latest
        ports:
            - "9090:9090"
        volumes:
            - ./nginx/nginx.conf:/etc/nginx/nginx.conf
        networks:
            - app-network
networks:
    app-network:
        driver: bridge




Notice that we override server.xml in the Tomcat container and nginx.conf in the nginx container, and also expose under ports the extra port (8081) that we specified in server.xml.

With this, using the same front-application, we are able to segregate execution so that slow non-critical transactions no longer block Tomcat threads and no longer impact the latency of critical traffic.
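Why two pools isolate latency can be demonstrated with a toy JDK example (this is not the production setup, just the underlying principle): slow jobs saturate one pool while a second pool stays free to serve critical work immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IsolationDemo {
    public static long criticalLatencyMillis() throws Exception {
        ExecutorService nonCritical = Executors.newFixedThreadPool(2);
        ExecutorService critical = Executors.newFixedThreadPool(2);

        // Flood the non-critical pool with slow "GET-like" work;
        // both of its threads are now busy and six jobs are queued.
        for (int i = 0; i < 8; i++) {
            nonCritical.submit(() -> { Thread.sleep(500); return null; });
        }

        // A critical "POST-like" request still runs immediately on its own pool.
        long start = System.nanoTime();
        critical.submit(() -> "ok").get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        nonCritical.shutdownNow();
        critical.shutdown();
        return elapsedMs;
    }
}
```

With a single shared pool the critical task would sit behind the slow ones; with its own pool it completes in a few milliseconds regardless of the backlog.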

Running a test API project with the above Docker Compose setup gives the results below.

The namePrefix specified in server.xml appears in the logs along with the thread number.



2024-06-25 17:05:15 25-Jun-2024 11:35:15.452 INFO [main] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
2024-06-25 17:06:08 25-Jun-2024 11:36:07.999 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/group-2.6.4.war] has finished in [103,096] ms
2024-06-25 17:06:08 25-Jun-2024 11:36:08.025 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
2024-06-25 17:06:08 25-Jun-2024 11:36:08.038 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8081"]
2024-06-25 17:06:08 25-Jun-2024 11:36:08.040 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [103243] milliseconds
2024-06-25 17:06:09 25-Jun-2024 11:36:09.079 INFO [nonCriticalExecGroup-1] com.tomcat.group.GroupApplication$AttachmentsNonMTController.endpoint2 AttachmentsNonMTController Controller - Thread:nonCriticalExecGroup-1
2024-06-25 17:06:09 25-Jun-2024 11:36:09.079 INFO [nonCriticalExecGroup-2] com.tomcat.group.GroupApplication$AttachmentsNonMTController.endpoint2 AttachmentsNonMTController Controller - Thread:nonCriticalExecGroup-2




2024-06-25 17:07:59 25-Jun-2024 11:37:59.146 INFO [criticalExecGroup-3] com.tomcat.group.GroupApplication$MessageRequestsMTController.getMT MessageRequestsMTController Controller - Thread:criticalExecGroup-3
2024-06-25 17:10:17 25-Jun-2024 11:40:17.551 INFO [criticalExecGroup-4] com.tomcat.group.GroupApplication$MessageRequestsMTController.getMT MessageRequestsMTController Controller - Thread:criticalExecGroup-4
2024-06-25 17:10:18 25-Jun-2024 11:40:18.801 INFO [criticalExecGroup-5] com.tomcat.group.GroupApplication$MessageRequestsMTController.getMT MessageRequestsMTController Controller - Thread:criticalExecGroup-5
2024-06-25 17:10:19 25-Jun-2024 11:40:19.428 INFO [criticalExecGroup-6] com.tomcat.group.GroupApplication$MessageRequestsMTController.getMT MessageRequestsMTController Controller - Thread:criticalExecGroup-6





This kind of Tomcat configuration can be used wherever we want to divide execution across different thread pools.
