Is Serverless Cheaper? 3 Questions That Will Help You Find Out
Your architecture has real implications for your costs too
How do you know if adopting serverless will make running your application cheaper?
Given the pay-as-you-go model and the generous free tiers that accompany the main serverless services, it is easy to assume that simply going serverless will make our costs lower across the board.
Unfortunately, it is all too common for teams to be caught off guard when costs spiral out of control as their application grows in size and traffic.
There are, of course, many ways of keeping costs in check, and most of them are valid and helpful.
I'd like to suggest three questions that can help us approach the problem from a more architectural point of view. We can ask ourselves these questions as we plan, design, and build our application at every growth stage.
#1 Are We Fully Optimised?
Virtually every cloud service comes with optimisation recommendations.
And as the application grows in size, capabilities, and traffic, applying those optimisations becomes increasingly important.
If our Lambda functions are oversized, our costs are bound to climb due to longer execution times and the extra memory allocated to run them. Similarly, if we're not rigorous about how we model our data in DynamoDB, we're likely to end up making multiple round-trip requests every time we need some data. That, needless to say, gets expensive.
These are just some common examples, but the bottom line is that almost every service can be operated in a more efficient or a more expensive way. If in doubt, the service's User Guide, along with the wider community, is the place to turn.
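To make the data-modelling point concrete, here is a minimal sketch of the single-table idea. The key formats and item names are hypothetical, and an in-memory list stands in for the table; against real DynamoDB, the same access pattern would be a single Query call via boto3 rather than one request per item.

```python
# Sketch of DynamoDB-style single-table modelling. The "table" below is an
# in-memory stand-in; in real DynamoDB this would be one Query on the
# partition key instead of several separate GetItem round trips.

table = [
    {"PK": "CUSTOMER#42", "SK": "PROFILE",        "name": "Ada"},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2024-001", "total": 30},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2024-002", "total": 55},
    {"PK": "CUSTOMER#99", "SK": "PROFILE",        "name": "Grace"},
]

def query(pk: str) -> list:
    """Stand-in for a single DynamoDB Query on the partition key."""
    return [item for item in table if item["PK"] == pk]

# Because the customer's profile and orders share a partition key,
# one request returns all of them -- no extra round trips.
items = query("CUSTOMER#42")
```

The design choice is that related items share a partition key, so the data your workload reads together is stored together.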
#2 Are We Able to Isolate Different Workloads?
Each workload, API endpoint, or section of our application is likely going to be under a different type of stress.
This is, of course, a key insight about modern applications, and it underpins much of the drive towards microservices: attempting to optimise the entire application as a single unit is wasteful and inefficient.
On the other hand, by leveraging microservices (or at least some form of distributed architecture) we can ensure that each part of the application has exactly the capacity it needs. No more, no less.
This type of control over each and every area of our application is really useful when trying to keep our costs down.
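As a rough illustration of why this control matters for cost, here is a back-of-envelope sketch. The traffic figures and durations are made-up assumptions, and the GB-second rate only approximates Lambda's published on-demand price:

```python
# Illustrative cost comparison: sizing each workload individually vs.
# provisioning everything at the heaviest workload's spec.
# All numbers are assumptions for the sake of the sketch.

PRICE_PER_GB_SECOND = 0.0000166667  # approximate Lambda on-demand rate (USD)

def monthly_cost(invocations: int, duration_s: float, memory_gb: float) -> float:
    """Compute cost as invocations x duration x allocated memory x rate."""
    return invocations * duration_s * memory_gb * PRICE_PER_GB_SECOND

# A chatty, lightweight API endpoint and a rare, heavy report generator.
light = monthly_cost(invocations=10_000_000, duration_s=0.1, memory_gb=0.125)
heavy = monthly_cost(invocations=10_000, duration_s=30, memory_gb=2.0)

# Isolated: each function gets exactly the memory it needs.
isolated = light + heavy

# One-size-fits-all: the chatty endpoint also runs at the heavy 2 GB spec.
uniform = monthly_cost(10_000_000, 0.1, 2.0) + heavy
```

Under these (invented) numbers the uniform setup costs several times more, purely because the high-traffic endpoint is dragged up to the capacity the rare batch job needs.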
#3 Are We Able to Delegate the Non-Urgent or Non-Immediate?
Not everything needs to be done straight away. Non-urgent tasks can instead be executed asynchronously, and this is exactly what a well-designed event-driven architecture enables.
The primary benefit of an event-driven architecture is, clearly, increased performance and responsiveness to the end user.
But I believe that it can be helpful in keeping costs down as well.
For example, we can cut down the length of our live requests (since we don't need to process everything straight away). We can also batch together similar tasks (like performing a single Query to grab multiple items from the same table all at once). Or we can run our non-urgent operations on lower-spec compute (e.g. a Lambda with very low memory), since we're not worried about the user having to wait for a response.
These are all scenarios where the synchronous alternative would be more expensive.
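As a sketch of the batching idea, here is how queued writes might be grouped before hitting the database. The 25-item cap mirrors DynamoDB's BatchWriteItem limit; in production the tasks would typically arrive via an SQS-triggered Lambda rather than the hard-coded list used here:

```python
# Sketch: draining a queue of non-urgent writes in batches instead of
# one API call per item. The batch size mirrors DynamoDB's BatchWriteItem
# cap of 25 items per request.

from itertools import islice
from typing import Iterable, Iterator, List

BATCH_SIZE = 25  # BatchWriteItem accepts at most 25 items per call

def batched(tasks: Iterable[dict], size: int = BATCH_SIZE) -> Iterator[List[dict]]:
    """Group individual queued tasks into write batches."""
    it = iter(tasks)
    while batch := list(islice(it, size)):
        yield batch

# 60 queued writes become 3 batched API calls instead of 60 individual ones.
queued = [{"id": n} for n in range(60)]
batches = list(batched(queued))
```

Because none of these writes are urgent, letting them accumulate in a queue and flushing them in batches trades a little latency for far fewer requests.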
As I said at the start, these questions are by no means the only ones you should be asking when trying to keep costs under control. But they are architectural health checks that have real implications on the overall cost of a serverless application.
This is why I recommend that most projects start as a Serverless Monolith: early on, the size of your Lambda functions or your DynamoDB data modelling strategy is not that relevant. You're still going to get more than reasonable performance from those services, and you're better off focusing on the core business logic of your product.