Single-Threaded vs Multi-Threaded Servers: An Experiment with Node.js and Java

michinoins

Mikiya Ichino

Posted on May 18, 2023

Introduction

As a fullstack developer who started my career with Java and Spring Boot, I've become familiar with how servers like Apache Tomcat handle multiple requests using a multi-threading mechanism.

Each incoming request is handed to its own thread (or, in older server designs, its own process), allowing multiple requests to be handled simultaneously.
However, this traditional approach runs into a well-known issue called the C10K problem: the per-connection cost of threads or processes makes it difficult for a server to efficiently manage ten thousand concurrent connections.

In contrast, event-driven, single-threaded servers like Node.js introduced a way of handling large numbers of concurrent connections more effectively. In this article, I compare the behavior of a single-threaded server (Node.js) and a multi-threaded server (Java with Spring Boot, running on Apache Tomcat) when dealing with CPU-intensive requests under high load.

Theoretical Expectations

My prediction: when a server receives CPU-intensive requests that each take a considerable amount of time to process, the single-threaded server will struggle more than the multi-threaded one. A multi-threaded server can process multiple CPU-intensive tasks in parallel, whereas a single-threaded server can only run one at a time.

Single-Threaded vs Multi-Threaded

Before we dive into the experiment, let's briefly distinguish between single-threaded and multi-threaded servers.

Single-threaded servers like Node.js run your code on a single thread, executing one operation at a time. Because Node.js is non-blocking and event-driven, it can handle many concurrent connections with minimal overhead, as long as each request spends most of its time waiting on I/O rather than computing.
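
To make the blocking behavior concrete, here is a minimal standalone sketch (not part of the experiment): a timer that should tick every 100 ms stops ticking while a synchronous CPU-bound loop occupies the single thread.

block-demo.js

const start = Date.now();

// A timer that should fire roughly every 100 ms
const timer = setInterval(() => {
  console.log(`tick at ${Date.now() - start} ms`);
}, 100);

// Synchronous, CPU-bound work that monopolizes the single thread
function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* burn CPU */ }
}

// After ~250 ms, block the event loop for one second.
// Ticks stop during the blocking call and resume only afterwards.
setTimeout(() => busyWork(1000), 250);

// Stop the demo after two seconds
setTimeout(() => clearInterval(timer), 2000);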

Conversely, multi-threaded servers such as Apache Tomcat (which Spring Boot embeds by default) hand each request to its own worker thread, so multiple operations can execute in parallel across CPU cores. This makes them better suited to CPU-intensive tasks. However, the memory and scheduling overhead of creating and managing many threads can reduce efficiency when dealing with a very large number of concurrent connections.

The Experiment

To illustrate the difference in performance under high load, I've created two simple servers: one in Node.js and one in Java with Spring Boot.
Each server exposes a single endpoint that calculates the 40th Fibonacci number, a CPU-intensive task.
We will use a load testing tool to simulate a high number of concurrent requests.

High Load Testing

High load testing is a form of performance testing that exposes a server or system to a workload nearing or exceeding its limit. It helps identify potential bottlenecks or performance issues under high stress conditions.

In our experiment, we'll use a benchmarking tool called Autocannon, capable of generating a high load by making numerous HTTP requests to a server.

The Tools

autocannon

Autocannon is a Node.js-based benchmarking tool that supports HTTP pipelining and HTTPS and provides detailed performance statistics. Here's how to install Autocannon using npm:

npm install -g autocannon

To simulate a high load, we'll run Autocannon as follows (shown here against the Node.js server on port 3000):

autocannon -c 50 -d 10 http://localhost:3000/

This command means "send as many requests as possible to http://localhost:3000/ for 10 seconds, using 50 concurrent connections".

The number of requests actually completed depends on several factors: network capacity, the server's response time, and so on.
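
If you prefer to drive the benchmark from a script, autocannon also exposes a programmatic Node.js API. Here's a minimal sketch (the file name is arbitrary; the options mirror the CLI flags above):

bench.js

const autocannon = require('autocannon');

autocannon(
  {
    url: 'http://localhost:3000/', // target server
    connections: 50,               // same as -c 50
    duration: 10                   // same as -d 10 (seconds)
  },
  (err, result) => {
    if (err) throw err;
    // result contains statistics similar to the CLI output
    console.log(`requests: ${result.requests.total}, timeouts: ${result.timeouts}`);
  }
);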

Node.js Server

Here's the code for our Node.js server:

fibonacci.js 

const http = require('http');

// Naive recursive Fibonacci: deliberately CPU-intensive.
// Base case matches the Java version below (fib(0) = 0, fib(1) = 1).
function fibonacci(n) {
  if (n < 2) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

const server = http.createServer((req, res) => {
  // This synchronous calculation blocks the event loop while it runs
  const fibNumber = fibonacci(40);
  res.end(`Fibonacci result: ${fibNumber}`);
});

server.listen(3000, () => {
  console.log('Server listening on port 3000');
});

In this script, we create a plain Node.js HTTP server (using the built-in http module) whose single handler calculates the 40th Fibonacci number for every incoming request.

Run the server:

node fibonacci.js
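
A quick sanity check with curl should return the computed value (102334155 with the Fibonacci definition above):

curl http://localhost:3000/
Fibonacci result: 102334155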

Then, in another terminal, run the load test:

autocannon -c 50 -d 10 http://localhost:3000/

(Screenshot: Autocannon output for the Node.js server)

The result: Autocannon sent 100 requests, 41 of which timed out.

Java (Spring Boot) Server

Here's the corresponding Java code using Spring Boot, which serves requests through its embedded, multi-threaded Apache Tomcat:

FibonacciController.java

package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FibonacciController {
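    // Each request is served on its own Tomcat worker thread,
    // so several Fibonacci calculations can run in parallel.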
    @GetMapping("/")
    public String calculateFibonacci() {
        int fibNumber = fibonacci(40);
        return "Fibonacci result: " + fibNumber;
    }

    private int fibonacci(int n) {
        if (n <= 1) return n;
        else return fibonacci(n-1) + fibonacci(n-2);
    }
}


Run the server:

./gradlew bootRun

Then run the load test against port 8080, Spring Boot's default port:

autocannon -c 50 -d 10 http://localhost:8080/

(Screenshot: Autocannon output for the Spring Boot server)

The result: Autocannon sent 162 requests, with no timeouts.

Result

Here are the results of running the same Autocannon command (50 connections for 10 seconds) against each server:

Node.js (port 3000): 100 requests sent, 41 of them timed out.

Spring Boot / Tomcat (port 8080): 162 requests sent, no timeouts.

From these results, we can draw conclusions along three dimensions.

Concurrency Handling: The single-threaded Node.js server could not handle the concurrent requests as efficiently as the multi-threaded Spring Boot server. This is indicated by the 41 timeouts out of 100 requests: 41% of the requests were not fulfilled within the given timeframe. This is likely due to the blocking nature of the CPU-intensive Fibonacci calculation, which keeps the event loop busy and prevents Node.js from serving other incoming requests concurrently (see the sketch after these three points for one way to work around this).

Throughput: The multi-threaded Spring Boot server handled more requests (162) within the same timeframe, and without any timeouts. Its embedded Tomcat serves requests from a pool of worker threads (200 by default), so multiple Fibonacci calculations can run in parallel across CPU cores. This demonstrates that for CPU-intensive tasks, a multi-threaded server can achieve higher throughput than a single-threaded server like Node.js.

Resiliency: The Spring Boot (Tomcat) server proved more resilient under high load, given that it didn't experience any timeouts. This suggests that for applications where resiliency under load is important, especially for CPU-bound workloads, a multi-threaded server may be the more appropriate choice.
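
As an aside (this was not part of the experiment), Node.js can mitigate the event-loop blocking shown above by moving CPU-bound work onto worker threads with the built-in worker_threads module. Here is a minimal, hypothetical sketch of how the same endpoint might offload the calculation:

fibonacci-worker.js

const http = require('http');
const { Worker } = require('worker_threads');

// Run fib(n) in a throwaway worker thread so the event loop stays responsive.
// (Spawning one worker per request has its own overhead; a worker pool would
// be the more realistic choice.)
function fibonacciInWorker(n) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(
      `const { parentPort, workerData } = require('worker_threads');
       function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
       parentPort.postMessage(fib(workerData));`,
      { eval: true, workerData: n }
    );
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}

const server = http.createServer(async (req, res) => {
  const fibNumber = await fibonacciInWorker(40);
  res.end(`Fibonacci result: ${fibNumber}`);
});

server.listen(3000, () => console.log('Server listening on port 3000'));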

Conclusion

This experiment provides a clear illustration of how single-threaded and multi-threaded servers handle high load differently. However, it's important to note that neither model is universally better than the other. Node.js, with its event-driven, non-blocking model, excels at handling many concurrent connections, making it a great choice for real-time applications. Java, with its multi-threading capabilities, is often better suited for CPU-intensive tasks.

When choosing a technology stack for a server application, consider the specific requirements of your project, the expertise of your development team, and the strengths and weaknesses of the available technologies.
