Default tablet splitting in YugabyteDB πŸš€


Franck Pachot

Posted on July 25, 2022


I have given some ideas about the recommended tablet size and number in Distributed SQL Sharding: How Many Tablets, and at What Size? You can define the initial number of tablets when creating a table or an index with SPLIT INTO, for hash sharding, or SPLIT AT, for range sharding. But if you don't, the initial number of tablets depends on a few parameters.
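As a reminder of that syntax, here is a minimal YSQL sketch; the table names, columns, tablet count, and split points are made up for the example:

  -- Hash sharding: pre-create 16 tablets (hypothetical table and count)
  CREATE TABLE events (
    id      bigint,
    payload text,
    PRIMARY KEY (id HASH)
  ) SPLIT INTO 16 TABLETS;

  -- Range sharding: choose the split points yourself (hypothetical values)
  CREATE TABLE measures (
    id  bigint,
    val numeric,
    PRIMARY KEY (id ASC)
  ) SPLIT AT VALUES ((100000), (200000));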

yb-master or yb-tserver

The first parameter, --enable_automatic_tablet_splitting, is set on the yb-master, which runs the auto-splitting task. The others are set on the yb-tserver (the values are taken from the one you are connected to when creating the table).

yb-master parameter: auto-splitting or not

With auto-splitting enabled (--enable_automatic_tablet_splitting=true) all CREATE statements without a SPLIT clause will start with one tablet per tserver. This lets YugabyteDB increase the number of tablets depending on their size. For large tables and indexes where you know the size in advance, you should split them manually at creation time to load faster, as in the sketch below. The others will start with that single tablet per server and be split as they grow.
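For example, a secondary index that you know will become large can be pre-split at creation time even when auto-splitting is enabled. A minimal sketch, with a hypothetical index name and tablet count, on the events table from the previous example:

  -- Pre-split a hash-sharded secondary index (hypothetical name and count)
  CREATE INDEX events_payload_idx ON events (payload HASH)
    SPLIT INTO 32 TABLETS;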

Without auto-splitting enabled (--enable_automatic_tablet_splitting=false), tables created without a SPLIT clause will have:

  • one tablet for range sharding (because the system cannot guess a split point before having data)
  • a system-calculated number of tablets for hash sharding (this doesn't concern colocated tables or tablegroups, which share a single tablet)

This post is about the last case (no SPLIT clause, no auto-splitting, no colocation) because the default number of tablets depends on several parameters: which API (YSQL or YCQL), how many servers, and how many CPUs per server.

yb-tserver parameters

I'm writing this as of 2.15 and the code is here. Here is a visualization of the branches:
https://app.code2flow.com/NyrSNBP9i5F3.png

You can see that it depends on many parameters, whether they are set (>0) or not, and I'll detail them later. If none are set, the number of tablets to create depends on:

  • number of servers in the cluster (tserver_count)
  • number of CPUs per server (from which ysql_num_shards_per_tserver / yb_num_shards_per_tserver are derived)

The number of shards per tserver is related to the number of CPUs because, if it is not explicitly set, it is calculated from them, so let's start with that.

Instance size: vCPU count

The code is in GetYCQLNumShardsPerTServer() and GetYSQLNumShardsPerTServer() and here is a summary:

                     YSQL tablets   YCQL tablets
auto-split enabled   1              1
Thread Sanitizer     2              2
1 vCPU               2              4
2 vCPUs              2              4
3 vCPUs              4              8
4 vCPUs              4              8
5 vCPUs              8              8
6 vCPUs              8              8
7 vCPUs              8              8
8 vCPUs              8              8
>= 8 vCPUs           8              8

Basically, the default is 8 tablets per table per server when servers have more than 4 vCPUs, which is your case if you follow the best practices. For smaller environments (labs, dev, test) the number of tablets is lower, and it is even halved for YSQL because each tablet is actually two RocksDB instances (a regular one for committed changes and an intents one for provisional records).
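To check what you get on your own cluster, you can create a table without any SPLIT clause and look at its properties. A sketch, assuming a recent YSQL version where the yb_table_properties() function is available; the table name is made up:

  -- Hypothetical table created with all defaults (no SPLIT clause)
  CREATE TABLE demo (id bigint, PRIMARY KEY (id HASH));
  -- num_tablets shows how many tablets were created for it
  SELECT num_tablets FROM yb_table_properties('demo'::regclass);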

The number of vCPUs is detected by the system. However, this detection may be wrong with some containerization, and you can set it explicitly with --num_cpus. For example, our Helm charts to install on Kubernetes set it according to the resource declaration.

num_cpus was introduced in version 2.2, defaults to zero (detection from the system), and is described as: Number of CPU cores used in calculations. On hyperthreaded systems, the most common on Intel processors, those are threads and not cores, and when virtualized they are vCPUs.
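Here is what that may look like on the command line. This is only a sketch: the data directory and master addresses are placeholders, and --num_cpus is the flag discussed here:

  # Hypothetical yb-tserver start, forcing the CPU count used in calculations to 4
  yb-tserver \
    --fs_data_dirs=/mnt/d0 \
    --tserver_master_addrs=yb-master-1:7100,yb-master-2:7100,yb-master-3:7100 \
    --num_cpus=4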

A default of 8 tablets per table per server may not be the best for all deployments. Your application may have 5 active tables, or hundreds of them with many secondary indexes. As shown in the flowchart above, there are two ways to override this: set the number of tablets per table per server, or set the total number of tablets per table per cluster. Each has a parameter for YSQL tables and one for YCQL tables:

                               YSQL table                      YCQL table
tablets per table per server   --ysql_num_shards_per_tserver   --yb_num_shards_per_tserver
tablets per table per cluster  --ysql_num_tablets              --ycql_num_tablets

If the number of tablets per cluster is set, it has priority over the number of tablets per server. And if a SPLIT clause is mentioned in the CREATE statement, it has priority over both parameters. I'll give more details about them below.
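As a sketch, with hypothetical values, here is how this could look in a yb-tserver flagfile (remembering the priority: a SPLIT clause wins over --ysql_num_tablets, which wins over --ysql_num_shards_per_tserver):

  # 2 tablets per YSQL table per tserver (used when the per-cluster flag below is not set)
  --ysql_num_shards_per_tserver=2
  # a fixed total of 6 tablets per YSQL table, whatever the number of tservers
  # (when set > 0, this takes priority over the per-server flag)
  --ysql_num_tablets=6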

YSQL

The default of ysql_num_shards_per_tserver was 8 in version 2.2, and it has been -1 since version 2.6, when the algorithm above (based on the number of CPUs) was implemented. Its description is: The default number of shards per YSQL table per tablet server when a table is created. If the value is -1, the system sets the number of shards per tserver to 1 if enable_automatic_tablet_splitting is true, and otherwise automatically determines an appropriate value based on number of CPU cores.

ysql_num_tablets defaults to -1 so that ysql_num_shards_per_tserver, described above, is used. It was introduced in 2.7 and is described as: The number of tablets per YSQL table. Default value is -1. If its value is not set then the value of ysql_num_shards_per_tserver is used in conjunction with the number of tservers to determine the tablet count. If the user explicitly specifies a value of the tablet count in the Create Table DDL statement (split into x tablets syntax) then it takes precedence over the value of this flag. Needs to be set at tserver.

YCQL

I mentioned the SPLIT clause for YSQL, but the YCQL equivalent is WITH tablets=, shown in the sketch below. In its absence, and with auto-splitting off, the following parameters are used.
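Here is what that explicit clause looks like; the keyspace, table, and tablet count are hypothetical:

  -- YCQL: pre-create 4 tablets for this (hypothetical) table
  CREATE TABLE myapp.events (id bigint PRIMARY KEY, payload text)
    WITH tablets = 4;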

yb_num_shards_per_tserver defaults to -1 and is used for YCQL tables in the same way as ysql_num_shards_per_tserver for YSQL. It is described as: The default number of shards per table per tablet server when a table is created. If the value is -1, the system sets the number of shards per tserver to 1 if enable_automatic_tablet_splitting is true, and otherwise automatically determines an appropriate value based on number of CPU cores.

ycql_num_tablets defaults to -1 and is used for YCQL tables in the same way as ysql_num_tablets for YSQL. It is described as: The number of tablets per YCQL table. Default value is -1. Colocated tables are not affected. If its value is not set then the value of yb_num_shards_per_tserver is used in conjunction with the number of tservers to determine the tablet count. If the user explicitly specifies a value of the tablet count in the Create Table DDL statement (with tablets = x syntax) then it takes precedence over the value of this flag. Needs to be set at tserver.

Default

If you leave all defaults and use no splitting clause, a table or index will have, at creation time:

  • one tablet per server if auto-splitting (--enable_automatic_tablet_splitting) is on
  • up to (number of servers) * 8 tablets when auto-splitting is off. For example, creating 30 SQL tables with 2 secondary indexes each (90 relations) on a cluster of 10 servers with 4 vCPUs (4 YSQL tablets per table per server) creates 90 * 10 * 4 = 3600 tablets. With Replication Factor RF=3 (the minimum for HA), this means 10800 tablet peers distributed over 10 servers, or 1080 tablet peers per server. Each used tablet peer will allocate a MemTable that can grow up to 128MB (--memstore_size_mb), which can add up to about 135GB at the maximum (it is allocated on usage). And you still need free memory for the cache. The query sketched below can help you count the tablets on your own cluster.
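The following query sums the tablets of all tables and secondary indexes in the current YSQL database. It is a hedged sketch, assuming a version where yb_table_properties() is available; multiply the result by the replication factor to get the number of tablet peers:

  -- Count relations and their tablets in the current database (sketch)
  SELECT count(*)                                                AS relations,
         sum((yb_table_properties(c.oid::regclass)).num_tablets) AS tablets
  FROM   pg_class c
  JOIN   pg_namespace n ON n.oid = c.relnamespace
  WHERE  c.relkind IN ('r', 'i')
  AND    n.nspname NOT IN ('pg_catalog', 'information_schema')
  -- the primary key index is the table itself in YugabyteDB, so skip it
  AND    NOT EXISTS (SELECT 1 FROM pg_index i
                     WHERE i.indexrelid = c.oid AND i.indisprimary);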

The full documentation is at https://docs.yugabyte.com/preview/architecture/docdb-sharding/tablet-splitting/#approaches-to-tablet-splitting. For the current default value of enable_automatic_tablet_splitting, please check it: at the time of writing it is still disabled in the stable version (2.14) but enabled in preview (2.15).

Note: in 2.18, the number of tablets no longer depends on the number of tservers when all parameters are left at their defaults and the number of vCPUs is low: https://github.com/yugabyte/yugabyte-db/commit/3272f36005a58aee4b6b1bf395cf575e433279d6

πŸ’– πŸ’ͺ πŸ™… 🚩
franckpachot
Franck Pachot

Posted on July 25, 2022

Join Our Newsletter. No Spam, Only the good stuff.

Sign up to receive the latest update from our blog.

Related