Caching Overview

Kapil Uthra

Posted on August 24, 2021

In computing, a cache is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.

Areas where caching can exist:

  • Client-Side - HTTP Cache Headers, Browsers

  • DNS - DNS Servers

  • Web - HTTP Cache Headers, CDNs, Reverse Proxies, Web Accelerators, Key/Value Stores

  • App - Key/Value data stores, Local caches

  • Database - Database buffers, Key/Value data stores
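At the client-side and web layers, for instance, the cache lives in HTTP itself. The sketch below is a minimal illustration, assuming a Flask app; the route and payload are made up and not from the article:

```python
# Minimal sketch: serving a response with an HTTP Cache-Control header so
# browsers and CDNs can reuse it. Assumes Flask is installed; the route and
# data are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/articles/top")
def top_articles():
    response = jsonify(["story-1", "story-2"])
    # Shared caches (CDNs, reverse proxies) and browsers may reuse this
    # response for up to 5 minutes.
    response.headers["Cache-Control"] = "public, max-age=300"
    return response
```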

Caching Considerations

Common questions to consider when working with caching:

  • Is it safe to use a cached value? The same piece of data can have different consistency requirements in different contexts. For example, during online checkout, you need the authoritative price of an item, so caching might not be appropriate. On other pages, however, the price might be a few minutes out of date without a negative impact on users.

  • Is caching effective for that data? Some applications generate access patterns that are not suitable for caching—for example, sweeping through the key space of a large dataset that is changing frequently. In this case, keeping the cache up to date could offset any advantage caching could offer.

  • Is the data structured well for caching? Simply caching a database record can often be enough to offer significant performance advantages. However, other times, data is best cached in a format that combines multiple records together. Because caches are simple key-value stores, you might also need to cache a data record in multiple different formats, so you can access it by different attributes in the record.
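For example, here is a hypothetical sketch (the key names and record shape are illustrative, and a local Redis instance is assumed) of caching the same user record under two keys so it can be looked up either by id or by email:

```python
# Cache one record under two keys so it can be found by different attributes.
import json
import redis

cache = redis.Redis()  # assumes a local Redis-compatible cache

def cache_user(user: dict, ttl_seconds: int = 3600) -> None:
    payload = json.dumps(user)
    cache.setex(f"user:id:{user['id']}", ttl_seconds, payload)
    cache.setex(f"user:email:{user['email']}", ttl_seconds, payload)

cache_user({"id": 42, "email": "jane@example.com", "name": "Jane"})
```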

Caching design patterns:

Lazy caching

  1. Your app receives a query for data, for example, the top 10 most recent news stories.
  2. Your app checks the cache to see if the object is in cache.
  3. If so (a cache hit), the cached object is returned, and the call flow ends.
  4. If not (a cache miss), then the database is queried for the object. The cache is populated, and the object is returned.
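A minimal sketch of these four steps, assuming a local Redis cache accessed through redis-py; the key name, the 300-second TTL, and fetch_top_stories_from_db() are illustrative placeholders:

```python
# Lazy caching (cache-aside): check the cache first, fall back to the database,
# then populate the cache for the next caller.
import json
import redis

cache = redis.Redis()  # assumes a local Redis-compatible cache

def fetch_top_stories_from_db():
    # Placeholder for the real database query.
    return [{"id": 1, "title": "Example story"}]

def get_top_stories():
    key = "top10:news"
    cached = cache.get(key)
    if cached is not None:                       # cache hit: return the cached object
        return json.loads(cached)
    stories = fetch_top_stories_from_db()        # cache miss: query the database
    cache.setex(key, 300, json.dumps(stories))   # populate the cache with a TTL
    return stories
```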

Write-through

In a write-through cache, the cache is updated in real time when the database is updated. So, if a user updates his or her profile, the updated profile is also pushed into the cache. You can think of this as being proactive to avoid unnecessary cache misses for data that you know is going to be accessed. A good example is any type of aggregate, such as the top 10 most popular news stories, or recommendations. Because this data is typically updated by a specific piece of application code or a background job, it's straightforward to update the cache as well.
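A minimal write-through sketch under the same assumptions (redis-py; update_profile_in_db() and the key format are placeholders). The point is that the cache write happens in the same code path as the database write:

```python
# Write-through: push the fresh value into the cache whenever the database is updated.
import json
import redis

cache = redis.Redis()  # assumes a local Redis-compatible cache

def update_profile_in_db(user_id: str, profile: dict) -> None:
    # Placeholder for the real database write.
    pass

def update_profile(user_id: str, profile: dict) -> None:
    update_profile_in_db(user_id, profile)                 # write to the primary store
    cache.set(f"profile:{user_id}", json.dumps(profile))   # keep the cache in sync
```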

Points to consider

Always apply a time to live (TTL) to all of your cache keys, except those you are updating by write-through caching. You can use a long TTL, say hours or even days. This approach catches application bugs where you forget to update or delete a given cache key when updating the underlying record; eventually, the cache key will auto-expire and be refreshed.

For rapidly changing data, rather than adding write-through caching or complex expiration logic, just set a short TTL of a few seconds. If you have a database query that is getting hammered in production, it's just a few lines of code to add a cache key with a 5-second TTL around the query. This code can be a wonderful Band-Aid to keep your application up and running while you evaluate more elegant solutions.
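A minimal sketch of that Band-Aid, again assuming redis-py; run_expensive_query() and the key name are placeholders:

```python
# Short-TTL wrapper around a hot query: results are stale by at most 5 seconds,
# but the database only sees roughly one query per 5-second window.
import json
import redis

cache = redis.Redis()  # assumes a local Redis-compatible cache

def run_expensive_query():
    # Placeholder for the real, expensive database query.
    return {"total_orders": 12345}

def hot_query_result():
    key = "report:orders-summary"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    result = run_expensive_query()
    cache.setex(key, 5, json.dumps(result))  # 5-second TTL
    return result
```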

Evictions occur when the cache's memory usage exceeds its configured maximum memory setting, causing the engine to select keys to evict in order to manage its memory. Which keys are chosen depends on the eviction policy that is selected.

  • allkeys-lfu: The cache evicts the least frequently used (LFU) keys, regardless of TTL
  • allkeys-lru: The cache evicts the least recently used (LRU) keys, regardless of TTL
  • volatile-lfu: The cache evicts the least frequently used (LFU) keys from those that have a TTL set
  • volatile-lru: The cache evicts the least recently used (LRU) keys from those that have a TTL set
  • volatile-ttl: The cache evicts the keys with the shortest TTL set
  • volatile-random: The cache randomly evicts keys that have a TTL set
  • allkeys-random: The cache randomly evicts keys, regardless of TTL
  • no-eviction: The cache doesn't evict keys at all; this blocks future writes until memory frees up.

For basic caching use cases, LRU-based policies are the most common; however, depending on your objectives, you may choose a TTL-based or random eviction policy if that better matches your needs. A high eviction rate usually implies that the cache cluster or node needs a larger memory footprint.
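As a hedged illustration, on a self-managed Redis-compatible cache the memory limit and eviction policy can be set with CONFIG SET (managed services such as ElastiCache usually expose these as parameter-group settings instead); the values below are illustrative only:

```python
# Selecting an eviction policy: evictions start once maxmemory is reached,
# and allkeys-lru evicts the least recently used keys regardless of TTL.
import redis

cache = redis.Redis()  # assumes a self-managed Redis-compatible cache

cache.config_set("maxmemory", "2gb")
cache.config_set("maxmemory-policy", "allkeys-lru")
```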

Source: AWS Documentation
