When working on applications, caching is crucial for performance because it reduces repeated database fetches. Among the various SQLite libraries available, Bun's native bun:sqlite integration stands out: it is built directly into the runtime and optimized for speed.
According to Bun's documentation, the bun:sqlite module is roughly 3-6x faster than better-sqlite3 and 8-9x faster than deno.land/x/sqlite for read queries, with each driver benchmarked against the Northwind Traders dataset; the benchmark source can be viewed and run from the Bun repository.
Efficient caching is only part of the picture: serializing JavaScript objects with JSON can be slow. This is why it makes sense to opt for efficient binary encodings such as Msgpackr or CBOR. These formats are faster to parse, support more complex data types, use less CPU, and store data more compactly, further improving overall application performance.
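As a quick illustration, msgpackr's pack and unpack functions can act as a drop-in replacement for JSON.stringify and JSON.parse. The serializerAdapter shape below is simply an assumed convention that the later snippets in this post rely on, not a cache-manager API:

import { pack, unpack } from "msgpackr";

// Minimal binary serializer adapter (assumed shape for the snippets below).
const serializerAdapter = {
  serialize: (value: unknown): Uint8Array => pack(value),
  deserialize: (data: Uint8Array): unknown => unpack(data),
};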
With this in mind, let's explore how to build a cache manager using bun:sqlite along with efficient binary encoding.
cache-manager provides a straightforward and intuitive API for caching, abstracting away the complexity of managing different cache stores and their configurations. With support for multiple stores, built-in expiration and TTL management, and robust error handling and fallback mechanisms, it ensures data integrity and freshness.
Additionally, cache-manager is highly customizable and extensible, allowing you to create custom cache stores tailored to your needs. This flexibility means you can set up and use caching with minimal code, allowing you to focus on your application's core logic.
The required interface to fulfill as a cache-manager store is as follows:
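The exact shape varies slightly between cache-manager versions; as a sketch, a v5-style Store contract for the adapter to satisfy looks roughly like this:

interface Store {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, value: T, ttl?: number): Promise<void>;
  del(key: string): Promise<void>;
  reset(): Promise<void>;
  mset(pairs: [string, unknown][], ttl?: number): Promise<void>;
  mget(...keys: string[]): Promise<unknown[]>;
  mdel(...keys: string[]): Promise<void>;
  keys(pattern?: string): Promise<string[]>;
  ttl(key: string): Promise<number>;
}

With that contract in mind, the store first opens a bun:sqlite database and tunes it with a few pragmas: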
PRAGMA main.synchronous = NORMAL;: Ensures that SQLite writes are fast while still maintaining a reasonable level of data safety. It does not guarantee as much durability as FULL, but it is sufficient for many use cases.
PRAGMA main.journal_mode = WAL2;: Improves concurrency by allowing readers to access the database while a write operation is ongoing.
PRAGMA main.auto_vacuum = INCREMENTAL;: Allows SQLite to reclaim unused space incrementally, rather than all at once.
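Put together, the initialization might look like the sketch below. The file name cache.sqlite and the table name held in name are arbitrary choices, the column names mirror the snippets later in this post, and WAL2 is only honored by SQLite builds that include it:

import { Database } from "bun:sqlite";

const name = "cache"; // hypothetical table name
const db = new Database("cache.sqlite");

// Apply the pragmas described above.
db.exec("PRAGMA main.synchronous = NORMAL;");
db.exec("PRAGMA main.journal_mode = WAL2;");
db.exec("PRAGMA main.auto_vacuum = INCREMENTAL;");

// Key/value rows with creation and expiry timestamps (milliseconds since epoch).
db.exec(`
  CREATE TABLE IF NOT EXISTS ${name} (
    key TEXT PRIMARY KEY,
    val BLOB,
    created_at INTEGER,
    expire_at INTEGER
  )
`);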
The val column stores the cached value in a binary large object (BLOB) format, allowing it to handle various data types depending on the chosen serializer.
expire_at is pre-calculated based on the ttl value in milliseconds, and val is pre-checked by a function isCacheable: (value: unknown) => boolean;.
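A single-key set along these lines might look like the following sketch; isCacheable, defaultTtl, serializerAdapter, and NoCacheableError are the same configurable pieces that appear in the mset snippet further down:

const set = async (key: string, value: unknown, ttl?: number) => {
  if (!isCacheable(value)) {
    throw new NoCacheableError(`"${value}" is not a cacheable value`);
  }
  // Convert ttl to milliseconds, mirroring the mset snippet below.
  const ttlValue = ttl !== undefined ? ttl * 1000 : defaultTtl;
  if (ttlValue < 0) {
    return;
  }
  const expireAt = Date.now() + ttlValue; // pre-calculated expiry
  const statement = db.prepare(
    `INSERT OR REPLACE INTO ${name} (key, val, created_at, expire_at) VALUES (?, ?, ?, ?)`
  );
  statement.run(key, serializerAdapter.serialize(value), Date.now(), expireAt);
};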
3. Retrieving Cache Entries
SELECT * FROM ${name} WHERE key = ? AND expire_at > ? LIMIT 1
Returns one record that has not expired.
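Wrapped into a get function, the lookup and deserialization might look like this sketch, where serializerAdapter.deserialize is assumed to be the counterpart of the serialize call used above:

const get = async <T>(key: string): Promise<T | undefined> => {
  const statement = db.prepare(
    `SELECT * FROM ${name} WHERE key = ? AND expire_at > ? LIMIT 1`
  );
  const row = statement.get(key, Date.now()) as { val: Uint8Array } | null;
  if (!row) {
    return undefined; // missing or already expired
  }
  return serializerAdapter.deserialize(row.val) as T;
};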
Auto Purge and Batch Operations
1. purgeExpired
const purgeExpired = async () => {
  const now = Date.now();
  if (now - lastPurgeTime >= 60 * 60 * 1000) {
    const statement = db.prepare(`DELETE FROM ${name} WHERE expire_at < ?`);
    statement.run(now);
    lastPurgeTime = now;
  }
};
Keeps the cache clean and efficient by regularly removing stale entries, ensuring that the cache does not grow indefinitely and affect performance.
2. Batch Set (mset)
const mset = async (pairs: [string, unknown][], ttl?: number) => {
  const ttlValue = ttl !== undefined ? ttl * 1000 : defaultTtl;
  if (ttlValue < 0) {
    return;
  }
  const expireAt = Date.now() + ttlValue;
  const stmt = `INSERT OR REPLACE INTO ${name} (key, val, created_at, expire_at) VALUES ${pairs
    .map(() => '(?, ?, ?, ?)')
    .join(', ')}`;
  const bindings = pairs.flatMap(([key, value]) => {
    if (!isCacheable(value)) {
      throw new NoCacheableError(`"${value}" is not a cacheable value`);
    }
    return [key, serializerAdapter.serialize(value), Date.now(), expireAt];
  });
  const statement = db.prepare(stmt);
  statement.run(...bindings);
};
Batching improves efficiency by reducing the number of individual database operations. In the same way, mget returns all non-expired records for a set of keys with a single query, as sketched below.
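A minimal mget sketch along those lines, reusing the same table and serializer assumptions as the earlier snippets:

const mget = async (...keys: string[]) => {
  const placeholders = keys.map(() => '?').join(', ');
  const statement = db.prepare(
    `SELECT * FROM ${name} WHERE key IN (${placeholders}) AND expire_at > ?`
  );
  const rows = statement.all(...keys, Date.now()) as { key: string; val: Uint8Array }[];
  // Preserve the requested key order; missing or expired keys resolve to undefined.
  const byKey = new Map<string, unknown>();
  for (const row of rows) {
    byKey.set(row.key, serializerAdapter.deserialize(row.val));
  }
  return keys.map((key) => byKey.get(key));
};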
Bun's SQLite implementation combined with efficient binary encoding formats like Msgpackr provides a powerful solution for building fast and compact cache stores.
For the complete source code and implementation details, visit: