Souring on Serverless

Levi Nunnink

Posted on December 7, 2023


For those who don’t know, Serverless computing is a cloud computing execution model where the cloud provider dynamically manages the allocation of machine resources. In a serverless architecture, developers can build and run applications and services “without having to manage the infrastructure”.

The big draw for me as someone who has no desire to ever SSH into a server again was the idea of a “resourceless” function that just runs when I need it. Even more magical was the idea of auto-scaling: The ability to handle any load of traffic without ever having to think about it is tantalizing. And I still think it is… in theory.

Cold starts are killer when you need a quick response

One of the key things to know when building an application is what needs to be synchronous and what can be asynchronous. HTTP is a synchronous protocol: the client sits waiting from the moment it sends the request. That should have been my first clue that pointing HTTP requests at infrastructure that has to start up before it can respond wasn't exactly a recipe for success.
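You can feel this for yourself by timing the same endpoint twice from a client; the first hit usually pays the cold-start tax. A quick sketch (the URL is a placeholder, not a real endpoint):

```typescript
// Hypothetical serverless-backed endpoint; substitute your own.
const url = "https://example.com/api/hello";

// The first request often pays the cold-start penalty; the second
// usually lands on a warm instance and comes back much faster.
// (Requires Node 18+ for the global fetch, run as an ES module.)
for (const label of ["first (likely cold)", "second (likely warm)"]) {
  const start = performance.now();
  await fetch(url);
  console.log(`${label}: ${Math.round(performance.now() - start)} ms`);
}
```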

The end result is a web application that feels sluggish, where the loading states linger longer than feels right. You can add static rendering and CDN caching, but if your API has to boot up before it can respond to a request, it's gonna feel slow.

I really noticed this on Smmall Cloud. That bouncing cloud was just getting too familiar.

The “solutions” don’t really work

When a function is triggered in a serverless environment like AWS Lambda, the platform needs to initialize resources (setting up the execution environment, loading dependencies, etc.) before the function can start executing. Then there's a "warm start," where the resources are already initialized and ready to execute; that happens when you hit the function again before it shuts down and returns its resources to the cloud. Warm start vs. cold start might be a sensible technical design for AWS, but it's a nightmare for debugging performance on a web application: sometimes your page loads fast, and sometimes it takes forever.
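You can actually watch the two paths happen, because module-scope code runs once per cold start while the handler body runs on every invocation. A rough sketch of a Node Lambda handler (the log lines are just for illustration):

```typescript
// Module-scope code runs once, when the execution environment is
// initialized -- i.e. on a cold start.
const bootedAt = Date.now();
let invocations = 0;

export const handler = async () => {
  invocations += 1;
  // invocations === 1 means this container just cold-started;
  // anything higher means we're riding a warm instance.
  console.log(
    `invocation #${invocations}, container age ${Date.now() - bootedAt} ms`
  );
  return {
    statusCode: 200,
    body: JSON.stringify({ coldStart: invocations === 1 }),
  };
};
```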

Serverless apps can be configured with "provisioned concurrency," effectively keeping one or two instances warm and reducing cold-start times. But, in my experience, this still has latency associated with it. And I don't think I'm the only one.
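For reference, here's roughly what that looks like with the AWS CDK; provisioned concurrency attaches to a version or alias rather than the bare function. A sketch, assuming a Lambda function `fn` already defined elsewhere in the stack:

```typescript
import * as lambda from "aws-cdk-lib/aws-lambda";

// `fn` stands in for a lambda.Function defined elsewhere in the stack.
declare const fn: lambda.Function;

// Provisioned concurrency is set on a version or alias -- and you pay
// for those warm instances whether or not they serve any traffic.
new lambda.Alias(fn.stack, "LiveAlias", {
  aliasName: "live",
  version: fn.currentVersion,
  provisionedConcurrentExecutions: 2, // keep two instances warm
});
```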

If you step back, the idea of provisioned concurrency undermines one of the key selling points of serverless in the first place: not using resources when you don't need them. If you have a service that's designed to shut down when it's idle, but you insert a mechanism to make sure it never shuts down, ask yourself: are you actually using the right service?

Maybe an always-on API is a good idea after all

Finally, after being tormented by the bouncing cloud one too many times, I decided to give DigitalOcean a spin and see what they had to offer. Within about 30 minutes I had my API deployed and running on their servers. And guess what? It's faster. And it's cheap too.
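For contrast, the always-on version is about as boring as it gets: a long-lived process that's already listening when the request arrives. A minimal Express sketch, not my actual API:

```typescript
import express from "express";

const app = express();

app.get("/api/hello", (_req, res) => {
  // No cold start: the process is already up, so the only cost
  // per request is the handler itself.
  res.json({ hello: "world" });
});

app.listen(3000, () => console.log("listening on :3000"));
```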

Thus dies the Serverless dream.

I’ll still use it for asynchronous processing. (The Sheet Monkey ETL works great on Serverless with SQS.)
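That kind of work is a much better fit because nobody is waiting on the response; a cold start just delays the job by a second. A rough sketch of an SQS-triggered handler (the processing step is a placeholder, not the actual Sheet Monkey pipeline):

```typescript
import type { SQSEvent } from "aws-lambda";

export const handler = async (event: SQSEvent) => {
  for (const record of event.Records) {
    // Placeholder for the real work: parse the message and run
    // whatever ETL step it describes. A cold start here just delays
    // the job slightly -- no user is staring at a spinner.
    const job = JSON.parse(record.body);
    console.log("processing job", job);
  }
};
```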

But moving forward, when I'm building HTTP APIs, it's back to servers for me.
