It was a nice project, but it had one major flaw: performance. Cloudflare Workers boast 0ms cold starts and low latency, yet my little blog felt sluggish. So what did I miss?
A global issue 🗺
Since the Worker's /post route hits the dev.to API directly, there's no caching involved. Caching is one of Cloudflare's headline features, so I assumed some layer would be baked in, but there's nothing of the sort by default. The problem here is summed up pretty well by this tweet:
Even though the blog worker is globally distributed, because it hits a (central) API for each request the turnaround time makes running at the edge pointless! This is especially the case for me as I live in Europe and the round-trip time to load the page is palpable.
If you haven't heard the term 'edge' before, it means a distributed network of servers that are geographically close to end users, enabling lower latency and faster load times. In this case, a request coming from Europe would hit a different server than one coming from the US but the content is the same.
I recently attended JSWorld Amsterdam, where a talk mentioned Workers KV. It sounded exactly like the caching layer that would boost performance while keeping the benefits of running at the edge. (And thankfully the free tier is more than sufficient for my needs.) This hypothesis was quickly verified: replacing the API call with a KV getter decreased time to first byte (server response delay) by ~10x!
From: 🐌
To: 🐆
Setting up KV 🗄
The steps to do this are pretty simple. Using the wrangler CLI you instantiate a binding to a variable name:
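For reference, the setup might look something like this (the namespace name `BLOG_POSTS` here is an assumption, not necessarily the project's actual binding):

```shell
# Create a KV namespace; wrangler prints an id to copy into wrangler.toml
wrangler kv:namespace create "BLOG_POSTS"

# Then reference it in wrangler.toml so it's bound to a variable in the Worker:
# kv_namespaces = [
#   { binding = "BLOG_POSTS", id = "<namespace-id>" }
# ]
```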
That variable is then available in your Worker and you can call get/put on it to retrieve/store data. With this I added a function to fetch post data, store it in the cache and track that it's been cached:
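A minimal sketch of what that on-demand caching function could look like. The names (`getAndCachePost`, the `post:`/`cached:` key scheme) and the injected `kv`/`fetchPost` parameters are my assumptions for illustration, not the project's actual code:

```javascript
// Return a post, caching it in KV on first access.
// kv: a Workers KV binding (get/put); fetchPost: calls the dev.to API.
async function getAndCachePost(slug, kv, fetchPost) {
  // Try the cache first.
  const cached = await kv.get(`post:${slug}`, { type: "json" });
  if (cached) return cached;

  // Cache miss: hit the dev.to API once, then store the result.
  const post = await fetchPost(slug);
  await kv.put(`post:${slug}`, JSON.stringify(post));
  // Track that this slug is cached so the index can link straight to /:slug.
  await kv.put(`cached:${slug}`, "1");
  return post;
}
```

Subsequent requests for the same slug are served straight from KV at the edge, never touching the central API again.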
Subsequent index renders will pick up on the cached flag and render the link as /:slug, avoiding this redirect. I added an /update route (that requires a password matching an environment variable) that will update the index of posts.
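The password-protected /update route could be sketched like this (handler shape, `UPDATE_PASSWORD`, and the injected `refreshIndex` callback are all assumptions for illustration):

```javascript
// Handle GET /update?password=...: refresh the cached index of posts.
// env: Worker environment (bindings + secrets); refreshIndex: re-fetches
// the post list from the dev.to API and stores it in KV.
async function handleUpdate(request, env, refreshIndex) {
  const url = new URL(request.url);
  // Reject requests that don't carry the password from the environment.
  if (url.searchParams.get("password") !== env.UPDATE_PASSWORD) {
    return new Response("Forbidden", { status: 403 });
  }
  await refreshIndex(env);
  return new Response("Index updated");
}
```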
Note: since the dev.to API rate-limits requests (somewhere around one per second) and the Worker's CPU time limit is 10ms, it's not possible to cache all posts in a single request. So on-demand caching is needed in lieu of batching. But once a post is cached, it doesn't need to be fetched again!
And now I can officially say it's cached at the edge! The index needs to be refreshed when I write a new post but that's as simple as loading the /update route and waiting a few seconds. Super happy with the result!
You can check out the code here (and fork it to deploy your own):
This project uses KV as a distributed store for article data and image placeholders.
To populate the cache, open the /update endpoint in your browser with the password (set as an environment variable) passed via query parameter, e.g. /update?password=test.
Then generate a Cloudflare API token and set CF_API_TOKEN and CF_ACCOUNT_ID in the repository's Actions secrets as well. The deploy action will then automatically deploy via Wrangler.