These days, cloud solution providers everywhere are marketing “serverless” solutions. While serverless computing is not at all new, it is getting a lot more attention due to the increasing popularity of DevOps and of cloud computing in general. The promise is that serverless will eventually change how developers create software and how operations teams manage that software in production.
This article provides an introductory look at serverless. What exactly is it? And what additional advantages does it offer to the DevOps process?
What Is Serverless?
Serverless computing is also sometimes referred to as function as a service (FaaS). It is a code execution model in cloud computing where the cloud provider FULLY manages everything (infrastructure, platform, containers, etc.) needed to serve function requests or calls. This includes starting and stopping any underlying platform as a service (PaaS) as required. In the serverless model, the cloud provider generally bills requests by measuring the amount of resources each request consumes. This differs from the billing model of traditional servers or virtual machines, which is usually per VM, per hour.
The goal of serverless is to completely separate the developer from server infrastructure so that they can focus on their primary responsibilities of software development. The cloud provider manages the operating system, physical server machines, and all the traditional tasks associated with running a server.
Serverless – Forget Machines. Think Functions.
From a developer’s perspective, with serverless computing, your thinking changes from what machines are doing to what functions are doing. The three dominant cloud providers offer serverless computing services in the form of AWS Lambda, Azure Functions, and Google Cloud Functions.
In each of the above three serverless examples, the consumer/developer neither needs to know or care what type of physical infrastructure is running behind the scenes.
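To make this concrete, here is a minimal sketch of what a “function” looks like in this model, using the Python handler signature that AWS Lambda expects (the event payload field below is purely illustrative):

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes for each request.

    'event' carries the request payload and 'context' carries runtime
    metadata. There is no server to provision or patch -- you only
    supply this function and the provider runs it on demand.
    """
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Azure Functions and Google Cloud Functions use different entry-point signatures, but the idea is the same: you deploy the function itself, and the provider worries about where and on what it runs.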
Sometimes, I almost want to think of serverless computing as the lazy developer’s approach. But hey, where would we really be in tech if it wasn’t for healthy doses of laziness 🙂
Like Agatha Christie once said:
I don’t think necessity is the mother of invention. Invention, in my opinion, arises directly from idleness, possibly also from laziness – to save oneself trouble.
I completely agree with the above quote. It can be argued that a lot of the automation we do as DevOps people stems from the “lazy” desire to save ourselves from monotonous tasks (trouble).
One thing that should be clear from the above is that serverless DOES NOT mean that no servers are involved in code execution. There’s still a server somewhere. You just don’t need to buy, manage, or maintain it anymore.
Pros and Cons
Like any other technology, serverless comes with its own unique set of advantages and disadvantages. As a cloud computing advocate, I have strong opinions on the matter and probably some strong biases too. However, I’ll try to keep the pros and cons below as objective as possible.
Here are some of the obvious advantages of serverless:
- Server software and hardware do not need to be managed by the developer: I personally manage a bunch of servers myself. However, I have noticed in the development community that this isn’t something most devs are keen on doing. One reason for this lack of interest in infrastructure is that, generally, most developers are pretty good at one thing – coding. Serverless allows them to focus on their primary strength and takes infrastructure headaches out of the picture.
- Scalability comes for “free” and is easy to achieve: Anyone building software in the cloud is aware of the huge scalability advantages that cloud computing offers. However, in the traditional model of virtual machines and servers, really efficient scalability is not exactly easy to achieve and usually requires the skillset of solution architects and DevOps engineers. With serverless, since you’re just paying to run a function, the cloud provider easily allocates more hardware to run your code as scalability needs arise. So technically speaking, while traditional virtual machines allow architects to design scalable and elastic solutions, serverless computing may already offer scalability and elasticity out-of-the-box.
- Cheaper than traditional cloud (VMs): With serverless, you pay per request and for the resources each request actually consumes, rather than for servers sitting idle. This way, startups can build a minimum viable product (MVP) nearly for free and then ease into the market without facing huge bills for minimal traffic.
- Lower human resources costs: Just as you don’t have to spend hundreds or thousands of dollars on hardware, you also don’t need to pay engineers to maintain it.
Word of caution though… If you build on serverless architecture using some of the fancy tooling available and then later decide to migrate to traditional cloud, you will need to pay an engineer for the initial migration in addition to regular maintenance costs. And depending on the number of technologies involved, it may be relatively tough (and expensive) to find a skilled architect who is available and interested in taking on your project. I am, of course, always available to discuss your project.
With all of the above advantages, why isn’t everyone using serverless architecture? While the above advantages might look too-good-to-be-true, the disadvantages provide some sort of reality check. Here goes…
- Vendor lock-in: This one is really bad in my opinion. For example, I would hate to be stuck with AWS Lambda only to discover later that Azure Functions would better suit my particular use case (and vice versa). As I have often commented, solution architecture should at least attempt to be technology agnostic. With the traditional cloud computing model, technology agnosticism is quite possible and relatively easy to achieve. But with serverless (in its current state), not so much. Even coding languages could be a limiting factor when it comes to serverless options (at least as of this writing). Currently, only Node.js and Python developers seem to have the autonomy to choose between any of the available serverless options.
- Not great for long-running applications: Sometimes it may actually be cheaper and more efficient to perform long-running operations on dedicated servers or on a traditional cloud. For example, AWS Lambda gives you five minutes to execute a task and if it takes longer, you’ll have to call another function.
- Steep learning curve: Each of the major cloud vendors provides loads of documentation for their serverless offerings. But you might quickly discover that this could be a case of documentation overload. As simple as it sounds to just run functions in the cloud, the associated learning curve is actually pretty steep (and vendor-specific too). You may need to work with microservices as well, which is another complicated task in itself.
- No locally stored data: You cannot rely on local storage persisting between invocations, and you cannot assume that two communicating functions are located on the same server. This means that your applications have to be stateless: any state has to live in an external service, as sketched below. While statelessness is a good thing in many cases, it differs significantly from traditional development patterns and is a limitation nevertheless.
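Here is a minimal sketch of that pattern, assuming an AWS Lambda handler and a hypothetical DynamoDB table named visit_counts (both the table and the payload field are illustrative, not a prescribed design):

```python
import boto3

# External store for state; the table name is an illustrative assumption.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("visit_counts")

def lambda_handler(event, context):
    """Stateless handler: each invocation reads and writes shared state
    in DynamoDB instead of relying on local variables or local disk,
    because the next invocation may land on a different machine."""
    page = event.get("page", "home")  # illustrative payload field
    response = table.update_item(
        Key={"page": page},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"page": page, "visits": int(response["Attributes"]["visits"])}
```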
Serverless Computing Use Cases
I have very often solved business problems with some code that needs to be executed at certain intervals. SharePoint timer jobs would be an example of such interval-based code. But SharePoint timer jobs need servers to run. There are many other scenarios where the code in our recurring job does not really care about the underlying server. For example, you may have some code that performs a periodic (nightly) extract, transform, and load. Further, this may be the only piece of code that needs to be executed for a particular client. On a Linux machine, such code can easily be scheduled to run automatically via cron. In the traditional cloud computing model, even for such code that doesn’t care about the underlying architecture, we would still generally spin up a VM (like an EC2 instance) for it.
With AWS Lambda and other serverless options, you can take your “server-agnostic” task and run it on the service as a pure function. You can even schedule it. No longer would that client need a piece of infrastructure for such a simple once-a-night thing.
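As a sketch, the nightly extract-transform-load job described above might look like the following once packaged as a function. The source URL, bucket name, and filtering rule are all placeholder assumptions, and the schedule itself (for example, a cron expression on a scheduled CloudWatch Events rule) would be attached to the function as a trigger rather than living in the code:

```python
import csv
import io
import urllib.request

import boto3

# Placeholder locations -- both are illustrative assumptions.
SOURCE_URL = "https://example.com/export.csv"
DEST_BUCKET = "nightly-etl-output"

def lambda_handler(event, context):
    """Nightly ETL packaged as a function: extract a CSV feed, apply a
    trivial transform, and load the result into S3. The schedule lives
    in the trigger configuration, not in this code."""
    # Extract
    raw = urllib.request.urlopen(SOURCE_URL).read().decode("utf-8")

    # Transform: keep only rows marked as active (an illustrative rule)
    reader = csv.DictReader(io.StringIO(raw))
    rows = [row for row in reader if row.get("status") == "active"]

    # Load
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    boto3.client("s3").put_object(
        Bucket=DEST_BUCKET, Key="latest.csv", Body=out.getvalue()
    )
    return {"rows_loaded": len(rows)}
```

No VM sits idle for the other twenty-three hours of the day; the client pays only for the minute or so the job actually runs each night.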
Another example where serverless functions can nicely be applied would be code for performing certain social media automation tasks. Social media managers and consultants often use an array of tools to automate and schedule things like tweets, Facebook posts, etc. Personally, I use automation for certain social media tasks as well. But I have found most of the available tools quite limited when it comes to the options/features they provide. So, for many years, I have been writing my own social media automation code. Even though I run most of such code on regular server architecture (because I already have quite a number of servers running 24/7 and performing other tasks), I have evaluated such tasks to be good candidates for serverless computing. Maybe I’ll migrate them at some point.
Other popular areas of application for serverless architecture include: Internet of Things (IoT), virtual assistants and chat bots, image-rich applications, agile and continuous integration pipelines, etc.
I use a serverless chat bot on this site from time to time as one way my site visitors can contact me. For chat bots, data processing needs to be much faster than for regular backends, which is why the industry makes heavy use of serverless technology when building them.
Is Serverless the Future?
For most organizations, the switch to serverless will require more than just a technical change. It will require a huge mindset change as well. From both a technical and a mindset perspective, the migration to serverless could be painful and perhaps not as cost-effective as it may initially seem. The pains become even more obvious if your organization already has an established workflow. In such a case, there would need to be a very strong justification for adopting FaaS tools. Another thing to note is that, at least as of this writing (May 29, 2018), serverless is still far from mainstream. However, it is moving in that direction pretty fast.
Serverless technologies continue to grow at ridiculous rates – along with machine learning, DevOps, VR and IoT. The technology is already available. As time passes, the breadth of possible use cases will become more clearly defined and the hope is that all vendors will gradually start providing greater language support and more functionality options.