Characteristics of serverless architecture

August 20, 2021

iauro Team

Contributing immensely to the global software solutions ecosystem. DevOps | Microservices | Microfrontend | DesignThinking
Despite its popularity, serverless architecture is largely covered in the literature from a benefits perspective only. Many of the articles – and the examples they use – come from cloud service providers, so it’s no surprise that they focus on the positives. This article attempts to build a better understanding of the traits of a serverless architecture.

I deliberately chose the word “trait” rather than “characteristic” because these are elements of a serverless architecture that cannot be changed. Characteristics are malleable; traits are inherent. Traits are also neutral: they are neither positive nor negative in themselves. In some cases, a trait I describe may carry a positive or negative connotation, but I will stay neutral so that you understand what you are facing.

Because traits are inherent, you have to accept them rather than fight them – such attempts tend to be quite costly. Characteristics, on the other hand, take effort to shape, and you can still get them wrong.

I must also point you to this article by Mike Roberts, who also explores the traits of serverless services. Although we use the same terminology, note that this article focuses on the traits of your architecture, not of the services you use.

This article does not try to help you understand every topic in detail; instead, it gives you a general overview of what awaits you. It defines the following traits of serverless architecture:

Low entry barrier

No host

Stateless

Elasticity

Distributed

Eventful

1. Low entry barrier

It’s relatively easy to get started running code in a serverless architecture. You can follow any tutorial and have your code running in a production-grade ecosystem. In many ways, the learning curve for serverless architecture is gentler than the one for typical DevOps skills – many elements of DevOps become unnecessary when you move to serverless. For example, you won’t need to acquire server management skills such as configuration management or patching. This is why a low barrier to entry is one of the traits of a serverless architecture.

This means that developers initially face a lower learning curve than with many other architectural styles. It does not mean the learning curve stays low: the overall curve gets steeper as developers continue their journey.

As a result of this trait, I have seen many new developers join projects and start contributing effectively very quickly. The ability for developers to get up to speed fast may be one of the reasons serverless projects reach the market sooner.

However, as we noted, things do get more complicated. Concerns such as infrastructure as code, log management, monitoring, and sometimes networking are still important, and you must understand how to handle them in a serverless world. If you come from a different development background, there are also a number of serverless architecture traits, covered in this article, that you will need to learn.

One thing I’ve noticed is that some developers tend to think that serverless architecture means they don’t have to think about code design. The rationale is that they are just dealing with functions, so code design doesn’t matter. In fact, design principles like SOLID still apply – you can’t outsource the maintainability of your code to your serverless platform. And while you could simply zip up your code and upload it to the cloud to run it, I strongly discourage doing so: continuous delivery practices are still relevant in a serverless architecture.
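To make the point about code design concrete, here is a minimal sketch in Python of what keeping design principles in a function might look like. The discount rule, event fields, and function names are all hypothetical – the point is only that the business logic lives in a plain, testable function, while the handler does nothing but translate the event:

```python
# Sketch: keeping domain logic out of the FaaS handler (hypothetical example).
# The business rule is a pure function that is trivial to unit-test; the
# handler only translates the event to and from that function.

def apply_discount(total: float, is_member: bool) -> float:
    """Pure business logic: members get 10% off orders over 100."""
    if is_member and total > 100:
        return round(total * 0.9, 2)
    return total

def handler(event: dict, context=None) -> dict:
    """Entry point: parsing and response shaping only."""
    total = float(event["total"])
    is_member = bool(event.get("is_member", False))
    return {"statusCode": 200, "body": {"total": apply_discount(total, is_member)}}
```

With this shape, `apply_discount` can be unit-tested without any serverless tooling at all, and the handler stays thin enough to swap platforms later.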

2. No host

One of the obvious traits of serverless architecture is the idea that you no longer deal with servers directly. In this day and age, when you have a variety of hosts on which you can install and run a service – physical machines, virtual machines, containers, and so on – it is useful to describe them all with one word. To avoid using the already overloaded term serverless, I’m going to use the word “host” here; hence the name of this trait, hostless.

One of the advantages of being hostless is significantly lower operational overhead. You don’t have to worry about upgrading your servers, and security patches are applied for you automatically. Being hostless also means you will be monitoring different metrics in your application, because most of the underlying services you use will not publish traditional metrics like CPU, memory, or disk size. You no longer need to interpret the low-level operational details of your architecture.

But different monitoring metrics mean you’ll have to re-learn how to tune your architecture. AWS DynamoDB, for example, exposes read and write capacity that you can monitor and adjust – a concept you need to understand, and one whose knowledge does not transfer to other serverless platforms. Each of the services you use also has its own limits. AWS Lambda limits the number of concurrent executions, not the number of CPU cores you have. To make it a little quirkier, changing your Lambda’s memory allocation also changes the amount of CPU it gets. If you use the same AWS account for both performance testing and production, you could disrupt your production environment when a performance test unexpectedly exhausts your concurrency limit. AWS documents the limits of each service pretty well, so be sure to check them in order to make the right architectural decisions.

Traditional server hardening does not apply, because a serverless architecture has different attack vectors. Your application security practices still apply, however, and keeping secrets out of your code is still a must. AWS describes this in its shared responsibility model: for example, you still need to protect your data if it contains sensitive information.

While your operational overhead is significantly lower, it is worth noting that on rare occasions you still need to manage the impact of changes to the underlying servers. Your application might rely on native libraries, and you will need to make sure they still work when the underlying operating system is upgraded.

3. Stateless

Functions as a Service, or FaaS, are ephemeral, so you cannot store anything in memory, because the compute containers that run your code are created and destroyed automatically by your platform. Statelessness is therefore a trait of serverless architecture.

Statelessness is a good trait for scaling out applications. The idea behind statelessness is that you are discouraged from storing state in your application. By not keeping state in the application, you can deploy more instances to scale out without worrying about application state. What I find interesting is that you are actually forced to be stateless, so the opportunities for error are greatly reduced. There are caveats: for example, compute containers can be reused and you can store state in them, but if you take that approach, proceed with caution.

As far as application development is concerned, you won’t be able to use technologies that require server-side state, because the burden of state management shifts to the caller. HTTP sessions, for example, can’t be used, as you don’t have a traditional web server with persistent file storage. If you want to use a stateful technology like WebSockets, you need to wait until your Backend as a Service supports it, or apply your own workaround.
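One common way to shift state to the caller – sketched here in Python with stdlib primitives, and with a hypothetical secret and claim names – is to replace the server-side session with a signed token that the caller presents on every request, so any stateless instance can verify it:

```python
# Sketch: instead of a server-side HTTP session, the caller carries its state
# in a signed token that every stateless invocation can verify independently.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical; in practice, fetch from a secret store

def issue_token(claims: dict) -> str:
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str) -> dict:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):   # constant-time comparison
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```

This is essentially the idea behind JWTs; in practice you would reach for an established library rather than rolling your own.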

4. Elasticity

Since your architecture is hostless, it will be elastic as well. Most of the serverless services you use are designed to be highly elastic: they can scale from zero to their allowed maximum and back down to zero, mostly managed automatically. Elasticity is a trait of serverless architecture.

The advantage of elasticity for scalability is huge. It means you don’t have to manage resource scaling manually, and many resource allocation problems simply disappear. In some cases, elasticity also means you only pay for what you use, so you can lower your running costs when usage is low.

You may need to integrate your serverless architecture with legacy systems that do not support this elasticity. When that happens, you can break your downstream systems, as they won’t be able to scale the way your serverless architecture does. If those downstream systems are mission-critical, it is important to think about how you will mitigate this problem – perhaps by limiting your AWS Lambda concurrency, or by putting a queue between you and the downstream systems.

Although a denial-of-service attack is harder to mount against such a highly elastic architecture, you become vulnerable to a denial-of-wallet attack instead, where an attacker forces you to breach your cloud account limits by driving your resource allocation up. To help prevent such attacks, you can add DDoS protection, such as AWS Shield, to your application. On AWS it is also helpful to set up AWS Budgets so you are notified when your cloud bill skyrockets. And if high elasticity is not what you expect, it is again useful to constrain your application, for example by limiting AWS Lambda concurrency.

5. Distributed

Since stateless compute is a trait, all your persistence needs will be stored in Backend as a Service (BaaS) offerings, usually a combination of them. Once you embrace FaaS more, you will also find that your units of deployment – functions – are smaller than you may be used to. As a result, a serverless architecture is distributed by default, and there are many components you need to integrate over the network. Your architecture will also consist of wiring together services such as authentication, database, distributed queue, and so on.

As we discussed earlier, being distributed has many advantages, including elasticity. Distribution also gives your architecture single-region high availability by default. In a serverless context, when one availability zone in your cloud provider’s region goes down, your architecture can use the other zones that are still up – all of which is transparent from a developer’s perspective.

There is always a trade-off in choosing an architecture, and here you trade consistency for availability. Typically in the cloud, each serverless service has its own consistency model. In AWS S3, for example, you get read-after-write consistency for PUTs of new objects in your bucket, while object updates are eventually consistent. You will quite often have to decide which BaaS to use, so keep an eye on the behavior of each one’s consistency model.

Another problem area is distributed message delivery. You should be familiar with the hard problem of exactly-once delivery, for example, because the common delivery guarantee for a distributed queue is at-least-once delivery. An AWS Lambda function can be invoked more than once because of this guarantee, so you must make sure your implementation is idempotent (it is also important to understand your FaaS retry behavior, where AWS Lambda may run more than once in the event of a failure). Other issues you need to understand include the behavior of distributed transactions. That said, resources for learning how to build distributed systems keep improving as microservices grow in popularity.

6. Eventful

Many of the BaaS offerings provided by your serverless platform naturally emit events. This is a good strategy for third-party services to offer extensibility to their users, since you have no control over their services’ code. Because you will be using a lot of BaaS in your serverless architecture, your architecture is event-driven by default.

I will concede, though, that even if eventedness is a trait of your architecture, it does not mean you have to fully embrace event-driven architecture. However, I have noticed that teams tend to adopt event-driven architecture when it is presented to them naturally. This is a similar idea to elasticity as a trait: you can always switch it off.

There are many advantages to being evented, one of which is low coupling between the components of your architecture. Serverless platforms, especially cloud providers, also make sure their FaaS integrates seamlessly with their BaaS: by design, FaaS can be triggered by event notifications.

The downside of event-driven architecture is that you can start to lose a coherent view of the system as a whole, which makes troubleshooting harder. Distributed tracing is an area worth looking into, even though it is still in its infancy in the serverless world. AWS X-Ray is a service you can use out of the box on AWS. X-Ray does have its limitations, and if you outgrow it, keep an eye on this space as more third-party offerings appear. This is also why the practice of logging correlation IDs is important, especially when a transaction spans multiple BaaS – so make sure you log correlation IDs.
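The correlation-ID practice can be sketched in a few lines of Python (field names and the handler shape are hypothetical): reuse the caller’s correlation ID if one arrived with the event, otherwise mint one at the edge, attach it to every log line, and pass it along in every downstream call:

```python
# Sketch: attaching a correlation ID to every log line so one transaction
# can be traced across multiple functions and BaaS calls.
import json
import uuid

def log(message: str, corr_id: str) -> None:
    # Structured logging: the correlation ID rides along with every entry.
    print(json.dumps({"message": message, "correlation_id": corr_id}))

def handler(event: dict, context=None) -> dict:
    # Reuse the caller's correlation ID, or start a new trace at the edge.
    corr_id = event.get("correlation_id") or str(uuid.uuid4())
    log("order received", corr_id)
    # ...include corr_id in every downstream call and emitted event...
    return {"correlation_id": corr_id}
```

With this in place, a log search for a single correlation ID reconstructs the path of one transaction through the whole event-driven system.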

In this article, we’ve covered six traits of serverless architecture: low barrier to entry, hostless, stateless, elastic, distributed, and event-driven. My intention is for you to know them as fully as possible so you can adopt serverless architecture well. Serverless architecture has brought an interesting paradigm shift that improves many aspects of software development, but it also poses new challenges that technologists have to get used to. There are also brief pointers here on how to address the problems each trait can cause, so hopefully these issues won’t stop you from adopting a serverless architecture.
