Category: serverless

Does Lambda need timeout and memory size parameters?

Following my previous post on judging the serverlessness of a technology, I apply this criterion to AWS Lambda. I argue that the timeout and memory size configuration parameters are non-essential and should be made optional. The need to think about them makes Lambda less serverless than it could be.

On timeout

The way you naturally write a function is to have it finish as soon as possible. It’s just good engineering and good for business. Why then artificially limit its execution time?

The most common case I hear for using the timeout is when a Lambda calls some external API. In this scenario, it acts as a fail-safe in case the API takes too long to respond. A better approach is to implement a timeout on the API call itself, in code, and fail the Lambda gracefully if the API does not respond in time, instead of relying on the runtime to terminate your function. That’s also good engineering.
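Here’s a minimal sketch of what that could look like on the Python runtime, assuming the requests library is packaged with the function; the endpoint URL and response shape are hypothetical:

    import json
    import requests  # assumed to be bundled with the deployment package

    EXTERNAL_API_URL = "https://api.example.com/resource"  # hypothetical endpoint

    def handler(event, context):
        try:
            # Give the external API at most 3 seconds to respond
            response = requests.get(EXTERNAL_API_URL, timeout=3)
        except requests.exceptions.Timeout:
            # Fail gracefully instead of letting the runtime kill the function
            return {"statusCode": 504,
                    "body": json.dumps({"error": "External API timed out"})}
        return {"statusCode": 200, "body": response.text}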

So here’s my first #awswishlist entry: Make timeout optional and let functions run as long as they need to.

On memory size

I have two issues with the memory size parameter.

First of all, it’s a leaky abstraction of the underlying system. You don’t just specify how much memory your function gets; you also implicitly set its CPU power. There’s a threshold above which the Lambda container is assigned 2 vCPUs instead of 1. Last time I checked, this was at 1024 MB, but there’s no way of knowing it unless you experiment with the platform. Since Lambda does not offer specialized CPU instances like EC2 does (yet?), this might not matter much, but I worked on a data processing application where it came into play. Why not allow us to configure this directly? What if I need less memory but more vCPUs?
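For context, this is how both parameters are set today, sketched with boto3; the function name and values are hypothetical, and the CPU allocation can only be influenced indirectly through MemorySize:

    import boto3

    lambda_client = boto3.client("lambda")

    # Both knobs live on the function configuration; CPU power cannot be set
    # directly, it is only implied by MemorySize.
    lambda_client.update_function_configuration(
        FunctionName="data-processing-function",  # hypothetical name
        MemorySize=1024,  # MB; also (indirectly) the vCPU allocation
        Timeout=300,      # seconds
    )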

However, a more serious point of contention for me is that setting the memory size is an exercise in capacity planning. That’s something that should have gone away in the serverless world. You have to set it for the worst-case scenario, as there’s no “auto-scaling” of memory. It really sucks when your application starts failing because a Lambda function suddenly needs 135 MB of memory to finish.

Hence here’s my second #awswishlist entry: Make memory size optional. Or provide “burst capacity” for those times a Lambda crosses the threshold.

Now, I won’t pretend to understand all the complexities behind operating the Lambda platform, and I imagine this may be an impossible request, but one can dream.

And while I’m at it, a third #awswishlist item: Publish the memory consumed by a Lambda function as a CloudWatch metric.
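Until that exists, a rough workaround is to measure the peak memory yourself and push it to CloudWatch at the end of the invocation. This is only a sketch, assuming the Python runtime and Linux semantics for ru_maxrss; the namespace and metric name are made up:

    import resource
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def handler(event, context):
        # ... the actual work of the function ...

        # Peak resident set size of this process; on Linux ru_maxrss is in kilobytes
        peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        cloudwatch.put_metric_data(
            Namespace="Custom/Lambda",  # made-up namespace
            MetricData=[{
                "MetricName": "MaxMemoryUsed",
                "Dimensions": [{"Name": "FunctionName",
                                "Value": context.function_name}],
                "Value": peak_kb / 1024.0,
                "Unit": "Megabytes",
            }],
        )

Of course, publishing a metric on every invocation adds a little latency and cost, which is exactly why it would be nicer if the platform did it for us.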

Closing remarks

I do see value in being able to set either of these parameters, but those are specialized cases. For the vast majority of code deployed on Lambda, the platform should take care of “just doing the right thing” and allow us developers to think less about the ops side.


Thinking less about servers

Even though serverless has been around for a couple of years now, there is no clear definition of what the term actually means. Leaving aside that it’s a misnomer to begin with, I think part of the confusion stems from the fact that it is being applied in two different ways. Serverless can either describe a quality of a technology (DynamoDB) or it can refer to an approach to building IT systems (a serverless chat-bot).

My way of judging the former is this:

The less you have to think about servers, the more serverless a technology is. Furthermore, serverless is not a binary value but a spectrum.

Let me give an example. On a completely arbitrary scale from 1 to 10, I would rate DynamoDB with provisioned capacity as 8/10 serverless. It’s not fully serverless because I still need to think deeply about data access patterns, predict read and write load, and monitor utilization once my system is operational. However, with the recent announcement of on-demand pricing, I would rate DynamoDB 10/10. I don’t need to think about any of the aforementioned idiosyncrasies (burdens, really) of using the technology.
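To illustrate with a sketch (using boto3; the table name and key schema are made up), the difference boils down to a single setting when creating the table:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # With on-demand billing there is no read/write capacity to predict or monitor
    dynamodb.create_table(
        TableName="messages",  # hypothetical table
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",  # vs. "PROVISIONED" plus ProvisionedThroughput
    )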

The second aspect of a serverless technology (and by proxy also a system) is that you don’t pay for idle, except for data storage. Once again, if you need to think about something even when it’s not running (and you’re clearly going to think about your credit card bill), it is not serverless.

This is the promise of serverless. Once you start combining these technologies into systems, you can focus on building value and leave the operational burden to the technology provider.