Exploring Experimental Feature Updates in Xahaud: A Performance Analysis

Denis Angell

Posted on October 21, 2024

In the ever-evolving landscape of blockchain technology, optimizing transaction throughput while maintaining system stability is a constant challenge. Xahaud is at the forefront of this endeavor, and today, we delve into two experimental feature updates aimed at enhancing transaction processing capabilities. These updates involve changes to the max_transactions and target_txn_in_ledger settings. Let's explore the impact of these changes on network performance and resource utilization.

Experimental Feature Updates

1. max_transactions Setting

The max_transactions setting, which determines the maximum number of transactions allowed in each ledger, has been increased from 250 to 1000. This change is designed to accommodate a higher volume of transactions per ledger, potentially increasing the overall throughput of the network.

2. target_txn_in_ledger Setting

Similarly, the target_txn_in_ledger setting, which specifies the target number of transactions per ledger, has been raised from 256 to 1000. This adjustment aims to align the target with the increased capacity, ensuring that the network can efficiently handle larger transaction batches.
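
For orientation, the sketch below shows how these two values could appear in a node's configuration file. The stanza names follow the rippled-lineage format that Xahaud inherits ([max_transactions] as a single-value stanza and target_txn_in_ledger under [transaction_queue]); the exact keys and defaults in a given Xahaud build may differ, so treat this as illustrative rather than authoritative.

```
# Illustrative xahaud.cfg excerpt (assumed stanza names, based on the
# rippled-lineage configuration format; verify against your build).

[max_transactions]
# Maximum number of transactions allowed in each ledger.
# Previously 250; raised to 1000 for the experimental run.
1000

[transaction_queue]
# Target number of transactions per ledger.
# Previously 256; raised to 1000 to match the increased capacity.
target_txn_in_ledger = 1000
```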

Testing Environment

To evaluate the impact of these changes, we conducted tests using a network configuration consisting of six validator nodes and two submission nodes. Each node operates on its own Virtual Private Server (VPS) and is provisioned through Ansible. Additionally, each node runs a lightweight server and listener that track ledger statistics alongside memory, disk, I/O, and CPU usage.
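
The post does not include the monitoring code itself, so the following is only a minimal sketch of what such a listener could look like, assuming a local admin WebSocket endpoint and using the host's load average and memory counters as rough CPU and memory proxies:

```typescript
// Minimal monitoring sketch (assumed endpoint and metrics; not the actual
// tooling used for these tests). Subscribes to the ledger stream of a
// Xahaud node and samples host resource usage at each ledger close.
import WebSocket from "ws";
import * as os from "os";

const WS_URL = "ws://127.0.0.1:6006"; // assumed local admin WebSocket port

const ws = new WebSocket(WS_URL);

ws.on("open", () => {
  // The ledger stream emits one message per closed ledger.
  ws.send(JSON.stringify({ id: 1, command: "subscribe", streams: ["ledger"] }));
});

ws.on("message", (raw: WebSocket.RawData) => {
  const msg = JSON.parse(raw.toString());
  if (msg.type === "ledgerClosed") {
    const usedMemGb = (os.totalmem() - os.freemem()) / 1024 ** 3;
    const [load1] = os.loadavg(); // 1-minute load average as a CPU proxy
    console.log(
      `ledger ${msg.ledger_index}: ${msg.txn_count} txns, ` +
        `mem ${usedMemGb.toFixed(2)} GB, load ${load1.toFixed(2)}`
    );
  }
});

ws.on("error", (err) => console.error("websocket error:", err));
```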

The load testing involved sending XRP-XRP Payment (No Hook) transactions to both submission nodes using a script. The script's BATCH_COUNT and SEEDS_IN_GROUP settings were used to split the load into batches and spread each batch across groups of sending seeds. To ensure transactions were included in the ledger, we used the fee endpoint to adjust transaction fees whenever a transaction queue was present.
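
Since the actual load-testing script is not included in the post, the sketch below only approximates the approach described above: it sends simple XRP-XRP payments from a group of pre-funded seeds through an xrpl.js-compatible client and consults the fee endpoint before each batch, paying the escalated open-ledger fee whenever a queue has formed. The BATCH_COUNT and SEEDS_IN_GROUP names come from the post; the endpoint URL, destination account, payment amount, and fee heuristic are assumptions.

```typescript
// Approximate load-generation sketch; the real script, seeds, and fee
// heuristic used for these tests are not public, so details are assumed.
import { Client, Payment, Wallet, xrpToDrops } from "xrpl";

const SUBMISSION_NODE = "wss://submission-node.example.com"; // assumed URL
const BATCH_COUNT = 100;     // number of batches to send (name from the post)
const SEEDS_IN_GROUP = 10;   // seeds submitting in parallel (name from the post)
const SEEDS: string[] = [];  // pre-funded test seeds would be listed here

// Query the fee endpoint; pay the escalated open-ledger fee only when a
// transaction queue has formed, otherwise the base fee is enough.
async function currentFeeDrops(client: Client): Promise<string> {
  const { result } = await client.request({ command: "fee" });
  const queued = Number(result.current_queue_size) > 0;
  return queued ? result.drops.open_ledger_fee : result.drops.base_fee;
}

async function main() {
  const client = new Client(SUBMISSION_NODE);
  await client.connect();

  for (let batch = 0; batch < BATCH_COUNT; batch++) {
    const group = SEEDS.slice(0, SEEDS_IN_GROUP).map((s) => Wallet.fromSeed(s));
    const fee = await currentFeeDrops(client);

    // One XRP-XRP payment (no Hook involved) per seed in the group.
    await Promise.all(
      group.map(async (wallet) => {
        const payment: Payment = {
          TransactionType: "Payment",
          Account: wallet.address,
          Destination: "rDestinationAccount", // assumed placeholder
          Amount: xrpToDrops("1"),
          Fee: fee,
        };
        const prepared = await client.autofill(payment);
        const signed = wallet.sign(prepared);
        await client.submit(signed.tx_blob);
      })
    );
  }

  await client.disconnect();
}

main().catch(console.error);
```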

Server Specifications

  • Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz (2x)
  • Proxmox virtualisation
  • 12 cores each, CPU type "host" (native architecture passed through to the VM)
  • 48 GB RAM each
  • Every VM has a dedicated SAS SSD storage device in "Unsafe" cache mode (write back)

Performance Analysis

Metric                               Before   After   Change
Transactions per second (TPS, avg)   161      260     🔼 61.49%
Transactions per ledger (TPL, avg)   643      1039    🔼 61.59%
Ledger close time (avg, seconds)     3.56     4.11    🔼 15.45%
CPU usage (avg, %)                   16.20    20.49   🔼 26.48%
Memory used (avg, GB)                1.49     2.41    🔼 61.74%

We conducted a series of performance tests to thoroughly understand the impact of the experimental feature updates. These tests were designed to measure key metrics such as transaction throughput, ledger processing efficiency, and resource utilization across the network. By comparing the network's performance with and without the experimental settings, we aimed to identify any significant changes and assess the trade-offs involved.

Baseline Performance (Without Experimental Settings)

  • Total Transactions: 31,000
  • Total Ledgers: 49
  • Transactions Per Second (TPS): 161
  • Transactions Per Ledger: 643
  • Average Close Time: 3.56 seconds

[Chart: ledger statistics for the baseline run]

Node Resource Utilization:

  • Max CPU Usage: 24%
  • Average CPU Usage: 16%
  • Max Memory Used: 1.73 GB
  • Average Memory Used: 1.49 GB

[Chart: node resource utilization for the baseline run]

Performance with Experimental Settings

  • Total Transactions: 39,000
  • Total Ledgers: 38
  • Transactions Per Second (TPS): 260
  • Transactions Per Ledger: 1039
  • Average Close Time: 4.11 seconds

[Chart: ledger statistics with the experimental settings]

Node Resource Utilization:

  • Max CPU Usage: 24%
  • Average CPU Usage: 20%
  • Max Memory Used: 2.62 GB
  • Average Memory Used: 2.41 GB

[Chart: node resource utilization with the experimental settings]

Management Summary

The recent settings change has resulted in significant performance improvements for our system, albeit with increased resource utilization. Here are the key findings:

  • Transaction Processing Speed: We've seen a substantial increase in both Transactions Per Second (TPS) and Transactions Per Ledger (TPL), with both metrics improving by over 61%. This translates to a much higher throughput capacity for the network.
  • Ledger Close Time: The average ledger close time increased by 15.45%, from 3.56 to 4.11 seconds. This is a modest regression, and it is outweighed by the gains in overall throughput.
  • Resource Utilization: The new settings require more computational resources. We observed a 26.48% increase in average CPU usage and a 61.74% increase in average memory consumption, reflecting the additional work performed per ledger.
  • Stability: The transaction submission graph in the "after" scenario shows a more consistent pattern, suggesting improved stability in transaction processing.
  • Node Performance: Individual node performance graphs indicate more uniform CPU usage across nodes after the change, which could contribute to better load distribution and overall system reliability.

In conclusion, the new settings have dramatically improved the system's transaction processing capabilities, with TPS and TPL both increasing by over 61%. This comes at the cost of increased resource utilization, but the trade-off appears favorable given the magnitude of the performance improvement. The increase in average ledger close time is a minor concern that is outweighed by the overall throughput gains.

Recommendation: Given the substantial performance improvements, we should consider maintaining these new settings. However, we should also closely monitor resource usage to ensure we have adequate capacity to sustain this higher performance level during peak loads. Additionally, we may want to investigate if further optimizations can reduce the slight increase in per-transaction processing time without sacrificing the throughput gains.
