Unlock Scalable Apps in 5 Minutes: Spring Reactive Revolutionizes Non-Blocking IO

Emily Johnson

Posted on October 8, 2024

A major milestone in Java EE 6 was the introduction of Servlet 3.0, a significant update designed to streamline development. By harnessing newer language features such as annotations and generics, Servlet 3.0 modernized the way developers wrote Servlets. One of its standout innovations was Async Servlets, which enabled asynchronous request processing. Furthermore, the web.xml deployment descriptor became largely optional, giving developers more flexibility. Building on this foundation, Servlet 3.1, part of Java EE 7, added non-blocking Input/Output (I/O), a crucial ingredient of scalable application development.

While Servlet 3.0's asynchronous request processing (Async Servlets) was a significant step forward, it still relied on traditional, blocking I/O, which limits an application's ability to scale. Non-blocking I/O, by contrast, lets developers build applications that handle a high volume of concurrent requests efficiently with far fewer threads.

To illustrate the benefits of non-blocking I/O, let's revisit the MyServlet code from our previous article. We'll modify its logic to demonstrate the improvements.

Recall the snippet:

@WebServlet(name="myServlet", urlPatterns={"/asyncprocess"}, asyncSupported=true)
public class MyServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        OutputStream out = response.getOutputStream();
        AsyncContext aCtx = request.startAsync(request, response); 
        doAsyncREST(request).thenAccept(json -> {
            out.write(json);  // BLOCKING!
            ctx.complete();
        });
    }
}

Unlocking Scalable Applications with Servlet 3.1

Servlet 3.1's non-blocking I/O removes this limitation: reads and writes no longer tie up a thread while waiting on a slow client, so the same thread pool can serve far more concurrent requests.


In the code above, the container's request thread is released immediately, and the actual work is delegated to a separate thread. For the Async Servlet to function as intended, the following prerequisites must be met:

  • The doAsyncREST() method must use an asynchronous HTTP client to issue the REST call and return a CompletableFuture; the AsyncHttpClient covered in our previous article fits this role (see the sketch after this list).
  • Any continuation attached with thenAccept() should likewise rely on asynchronous libraries rather than blocking calls.
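
For illustration, here is a minimal sketch of what doAsyncREST() could look like, assuming AsyncHttpClient 2.x; the class name, target URL, and byte[] return type are assumptions for this example rather than details from the original article:

import java.util.concurrent.CompletableFuture;

import javax.servlet.http.HttpServletRequest;

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

public class AsyncRestClient {

    private final AsyncHttpClient client = Dsl.asyncHttpClient();

    // Hypothetical implementation: issue a non-blocking GET and complete the
    // future with the raw response body once it arrives. The request parameter
    // is kept only to mirror the doAsyncREST(request) call in the Servlet above.
    public CompletableFuture<byte[]> doAsyncREST(HttpServletRequest request) {
        return client.prepareGet("https://example.com/api/data") // hypothetical endpoint
                     .execute()                   // ListenableFuture<Response>
                     .toCompletableFuture()
                     .thenApply(response -> response.getResponseBodyAsBytes());
    }
}

Because the call completes on the HTTP client's own event loop, no Servlet container thread is held while the remote service responds.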

In Servlet 3.0, input/output operations remained synchronous, meaning the thread invoking out.write() would block.

Consider a scenario where we need to send a large JSON payload back to the client. Since we’re using the NIO connector, the OutputStream first writes into the connector's buffers, which are then drained to the client through the NIO selector/channel mechanism. If the client is on a slow network, out.write() must wait until those buffers have room again, because InputStream/OutputStream operations are synchronous.

This blocking issue was addressed in Servlet 3.1 with the introduction of non-blocking I/O.

Servlet 3.1: The Dawn of Asynchronous I/O

Let’s explore this concept using the code snippet below:

@Override
public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    ServletOutputStream out = response.getOutputStream();
    AsyncContext ctx = request.startAsync();
    out.setWriteListener(new WriteListener() {
        @Override
        public void onWritePossible() throws IOException {
            // Write only while the container's output buffers can accept data
            while (out.isReady()) {
                byte[] buffer = readFromSomeSource();
                if (buffer != null) {
                    out.write(buffer); // Asynchronous write: never blocks
                } else {
                    ctx.complete();    // nothing left to send
                    break;
                }
            }
        }

        @Override
        public void onError(Throwable t) {
            ctx.complete();
        }
    });
}

In the code snippet above, we use a WriteListener, one of the Write/Read Listener interfaces introduced in Servlet 3.1. WriteListener defines an onWritePossible() method that the Servlet container invokes. Before each write we call ServletOutputStream.isReady() to check whether the NIO channel buffers can accept more data. If it returns false, the container schedules another invocation of onWritePossible() on a separate thread once writing becomes possible again. This guarantees that out.write() never stalls waiting for a slow client to drain the channel buffers.
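
Servlet 3.1 also provides the read-side counterpart, ReadListener, for consuming request bodies without blocking. The sketch below is not from the original article; the URL mapping, class name, and processChunk() helper are hypothetical, and it assumes the javax.servlet API at Servlet 3.1 level:

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/asyncread"}, asyncSupported = true) // hypothetical mapping
public class NonBlockingReadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        AsyncContext ctx = request.startAsync();
        ServletInputStream in = request.getInputStream();

        in.setReadListener(new ReadListener() {
            private final byte[] buffer = new byte[4096];

            @Override
            public void onDataAvailable() throws IOException {
                // Read only while data can be consumed without blocking
                while (in.isReady() && !in.isFinished()) {
                    int len = in.read(buffer);
                    if (len > 0) {
                        processChunk(buffer, len);
                    }
                }
            }

            @Override
            public void onAllDataRead() {
                ctx.complete(); // request body fully consumed
            }

            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
    }

    private void processChunk(byte[] chunk, int length) {
        // Hypothetical placeholder: hand the bytes to application logic
    }
}

The pattern mirrors the write side: isReady() guards every read, so a container thread is never parked waiting for bytes from a slow client.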

Effortless Input/Output Operations in Spring

To take advantage of non-blocking I/O in a Spring-based application, you need Spring 5, which builds on Java EE 7. Our earlier example, shown below, runs in fully non-blocking mode when executed on Spring 5 MVC with Tomcat 8.5+:

@GetMapping(value = "/asyncNonBlockingRequestProcessing")
    public CompletableFuture<String> asyncNonBlockingRequestProcessing(){
            ListenableFuture<String> listenableFuture = getRequest.execute(new AsyncCompletionHandler<String>() {
                @Override                public String onCompleted(Response response) throws Exception {
                    logger.debug("Asynchronous Non-Blocking Request Processing Complete");
                    return "Asynchronous Non-Blocking...";
                 }
            });
            return listenableFuture.toCompletableFuture();
    }
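
The snippet references getRequest and logger, which were set up in the previous article. For completeness, here is a hedged sketch of how that wiring might look, assuming AsyncHttpClient 2.x and SLF4J; the controller name and target URL are hypothetical:

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.BoundRequestBuilder;
import org.asynchttpclient.Dsl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AsyncNonBlockingController {

    private static final Logger logger =
            LoggerFactory.getLogger(AsyncNonBlockingController.class);

    // Reusable client and a prepared GET request; the target URL is hypothetical
    private final AsyncHttpClient asyncHttpClient = Dsl.asyncHttpClient();
    private final BoundRequestBuilder getRequest =
            asyncHttpClient.prepareGet("https://example.com/slow-endpoint");

    // The asyncNonBlockingRequestProcessing() handler shown above would live in this class.
}

Because the handler returns a CompletableFuture, Spring MVC switches to asynchronous request processing: the container thread is released immediately, and the response is written when the future completes.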


Rethinking Non-Blocking Request Handling in Modern Web Development

As we’ve traced the evolution of Servlet and Spring, it’s clear that both have been substantially reworked to support non-blocking operations. This shift lets us scale applications more efficiently with fewer threads. In our upcoming article, we’ll take an in-depth look at the Spring Reactive stack, with a particular focus on Spring WebFlux. A pertinent question emerges: if Spring MVC can already handle requests in a non-blocking manner, why introduce Spring WebFlux as a separate stack?

Stay tuned for the next article, where we’ll answer this question in detail.

