Serverless Revolution: the Good, the Bad and the Ugly
“It’s stupidity. It’s worse than stupidity: it’s a marketing hype campaign.”
‐ Richard Stallman commenting on cloud computing, Sep 2008
And, a decade later, you think twice when someone mentions the word “cloud”: is it that thing in the sky, or that other thing expected to host 83% of the world’s enterprise workloads by 2020?
Another revolution is underway, whether you like it or not. AWS is in the lead, with MS Azure and GCP following closely behind, all cherishing a common goal:
Untethering software from infra.
Serverless.
FaaS.
Death of DevOps.
You name it.
Regardless of the name (for the sake of convenience, we shall call the beast “serverless”), this new paradigm is already doing its part in reshaping the software landscape. We already see giants like Coca-Cola adopting serverless components into their production stacks, and frameworks like Serverless raising funding in the millions. Nevertheless, we should keep in mind that serverless is not for everyone, everywhere, every time—at least not yet.
Server(less) = State(less)
As a conventional programmer, the biggest “barrier” I see when it comes to serverless is statelessness. Whereas earlier I could be fairly certain that the complex calculation result I had stored in memory, or the fairly large metadata file I had extracted into /tmp, or the helper subprocess I had just spawned, would still be there once my program was back in control, serverless shatters pretty much all of those assumptions. Although implementations like Lambda tend to retain state for a while, the general contract is that your application should be able to abandon all hope and gracefully start from zero if it is invoked with a clean slate. There is no longer any in-memory state: if you wanna save, you save; if you don’t, you lose.
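For illustration, here is a minimal sketch (in Java, against the aws-lambda-java-core RequestHandler interface) of a handler that honours this contract: any in-memory state is treated as a disposable, best-effort cache that may survive on a warm container but is never depended upon. The class and method names are hypothetical.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Stateless by contract: the static cache may survive between invocations
 * (warm container), but nothing is allowed to depend on it being there.
 */
public class StatelessHandler implements RequestHandler<String, String> {

    // Best-effort cache: populated on a warm container, empty on a cold start.
    private static final Map<String, String> cache = new ConcurrentHashMap<>();

    @Override
    public String handleRequest(String key, Context context) {
        // Falls back to the expensive path whenever we start with a clean slate.
        return cache.computeIfAbsent(key, StatelessHandler::expensiveComputation);
    }

    // Stand-in for the “complex calculation” above; in a real function the
    // durable copy would live in an external store (S3, DynamoDB, Redis, ...),
    // not in memory or /tmp.
    private static String expensiveComputation(String key) {
        return "result-for-" + key;
    }
}
```

The design choice is the point: the cache is an optimization, never the source of truth, so a cold start costs latency but never correctness.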
Thinking from another angle, this might also be considered one of the (unintended) great strengths of serverless, because transient state (whose mere existence is made possible by “serverful” architecture) is the root of most—if not all—evil. Now you have, by design, less room for making mistakes—which could be a fair trade-off, especially for programmers like myself who are notorious for seeking (often premature) optimization via in-memory state management.
Nevertheless, we should not forget the performance penalties caused by the loss of in-memory state management and caching capacity: your state manager (data store), which was formerly a few “circuit hops” away, is now several network hops away, adding milliseconds—perhaps even seconds—of latency, along with more room for failures.
Sub-second billing
If you have been around for the last decade, you will have seen it coming: everything gradually moving into the pay-as-you-go model. Now it has gone to such lengths that lambdas are billed in 0.1-second execution intervals—and the quantization will continue. While this may not mean much of an advantage—and sometimes may even be a disadvantage—for steady, persistent loads, applications with high load variance can gain immensely from not having to provision and pay for their expected peak load all the time. Not to mention event-driven and batch-processing systems with sparse load profiles, which may enjoy savings of an order of magnitude, especially when they are small-scale and geographically localized.
Additionally, the new pay-per-resource-usage model (given that time—or execution time, to be specific—is also a highly valued resource) encourages performance-oriented programming, which is a good sign indeed. FaaS providers usually use composite billing metrics, combining execution time with memory allocation and so on, further strengthening the incentive for balanced optimization and ultimately yielding better resource utilization, less waste, and the resulting financial and environmental benefits.
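To make the incentive concrete, here is a back-of-the-envelope sketch of how such composite billing plays out. The per-GB-second rate and the 100 ms rounding are modelled loosely on AWS Lambda’s published pricing, so treat the numbers as illustrative:

```java
/** Back-of-the-envelope FaaS billing sketch; the rate is illustrative. */
public class FaasCostSketch {

    // Roughly AWS Lambda's per-GB-second rate at the time of writing (assumed).
    static final double RATE_PER_GB_SECOND = 0.00001667;

    /** Cost of one invocation: duration rounded up to the 0.1 s interval. */
    static double invocationCost(long durationMs, int memoryMb) {
        long billedMs = ((durationMs + 99) / 100) * 100;      // 0.1 s quantization
        double gbSeconds = (billedMs / 1000.0) * (memoryMb / 1024.0);
        return gbSeconds * RATE_PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // Ten million 120 ms invocations at 512 MB over a month:
        double monthly = 10_000_000 * invocationCost(120, 512);
        System.out.printf("~$%.2f/month%n", monthly);         // about $16.67
        // The bill is the same whether the load arrives in one burst or is
        // spread evenly over the month, unlike a server provisioned for peak.
    }
}
```

Notice that both knobs—running faster and allocating less memory—reduce the bill, which is exactly the balanced-optimization incentive described above.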
Invisible infra
In place of physical hardware, virtualized (later), or containerized (still later) OS environments, you now get to see only a single process: effectively a single function or unit of work. While this may sound great at first (no more infra/hardware/OS/support-utility monitoring or maintenance—hoping the serverless provider takes care of all that for us!), it also means a huge setback in terms of flexibility: even in the days of containers we at least had the freedom to choose a base OS of our liking (despite still being bound to the underlying kernel), whereas all we have now is the choice of programming language (and, sometimes, its version). However, anyone who has experienced the headaches of DevOps would agree that the latter is a very justifiable trade-off.
Stronger isolation
Since you no longer have access to the real world (you are generally a short-lived containerized process), there is less room for mistakes (inevitably, because there is actually less that you can do!). Even if you are compromised, your short lifetime and limited privileges can prevent further contamination, unless the exploit is strong enough to affect the underlying orchestration framework. It follows, unfortunately, that if such a vulnerability is ever discovered, it could be exploited big time, because a lambda-based malware host would be more scalable than ever.
Most providers deliberately restrict lambdas from attempting malicious activities such as sending spam email—a restriction that would be frowned upon by legitimate users but praised by the spam-haunted. (Imagine a monthly spike of millions of lambda runtimes—AWS already offers one million free invocations and 3.2 million seconds of execution time—sending spam emails to a set of users; a dozen free AWS subscriptions would give an attacker a substantial edge!)
Vendor lock-in: a side effect?
This is an inherent concern with every cloud platform—or, if you think about it, any platform, utility, or service. The moment you decide to leverage a “cool” or “advanced” feature of the platform, you are effectively coupled to it. This is true, more than ever, for serverless platforms: except for the language constructs, pretty much everything else is provider-specific, and attempting to write a “universal” function would end up as either an indecipherably complex pile of hacks and reinvented wheels, or, most probably, nothing at all.
In a sense, this is an essential and inevitable price to pay: if you have to be special, you have to be specific! Frameworks like Serverless are actively trying to resolve this, but the general consensus is that a versatile solution is still far off. An example follows.
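To see how little of a function is genuinely portable, consider a trivial echo function written for AWS Lambda. This is a sketch: the Azure Functions and Google Cloud Functions equivalents would need entirely different annotations and interfaces, as noted in the comments.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

/**
 * One line of portable logic buried in provider-specific scaffolding.
 * On Azure Functions this would be an @FunctionName-annotated method;
 * on Google Cloud Functions, an HttpFunction implementation. Only the
 * body of echo() moves between providers unchanged.
 */
public class EchoFunction implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String input, Context context) {
        return echo(input);  // the provider-specific entry point...
    }

    // ...wrapping the only genuinely portable piece: the logic itself.
    private static String echo(String input) {
        return "echo: " + input;
    }
}
```

And the handler signature is only the beginning: the event formats, permissions model, deployment packaging, and surrounding services all differ per provider as well.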
With great power comes great responsibility
Given their simplicity, versatility, and scalability, serverless applications can be a valuable asset for a company’s IT infrastructure; however, if not designed, deployed, managed, and monitored properly, things can get out of hand very easily, in terms of both architectural complexity and cost. So, knowing how to tame the beast is far more important than merely learning what the beast can do.
Best of luck with your serverless adventures!