The urgent need to build cloud-native applications

October 4, 2021

iauro Team

Contributing immensely to the global software solutions ecosystem. DevOps | Microservices | Microfrontend | DesignThinking

The nature of IT is changing: From an internal focus on process automation to an external market-oriented business offering. IT is changing from “business support” to “being business”.

This poses many challenges for “IT business as usual”, including development and operations processes, learning the language and behavior of the minimum viable product (MVP), building teams that combine IT and business personnel, and finding the right mix of skills.

Of all the challenges that IT departments face in transitioning to their new role, perhaps none is more difficult than developing applications to support “being business”. Simply put, traditional application architectures, operations, and development speeds are completely inadequate in this new world. For this new class of applications, the phrase “cloud-native applications” is often used. It is a good way to characterize applications that tend to have faster release cycles and highly volatile workloads, and that use DevOps as a cross-group process methodology.
 

The five elements of a cloud-native app

The question is, what are the elements of a cloud application? There are five important aspects:
Application design: Moving to microservices
API Access: Internal and External Access Using Standardized Methods
Operational Integration: Aggregating log and monitoring data for application management
DevOps: Automating the Application Lifecycle
Testing: Changing the Role and Use of Quality Assurance (QA)

Each of these elements is an important part of bringing a cloud-native application into production. Failure to address any one of them can result in an application that fails to satisfy external users and internal customers. Addressing all of them, however, greatly increases the likelihood of creating an application that meets the needs of an important business initiative.

 

1. Application design: Moving to microservices

One of the biggest requirements for cloud-native applications is speed: fast delivery and rapid iteration of application functionality. One of the biggest hurdles is traditional application architecture. In such applications, all the code that makes up the application is combined into a single monolithic executable.

This simplifies deployment – only a single executable needs to be installed on a server – but greatly complicates development. Any code change, however minor, requires rebuilding the entire executable. This, in turn, requires integrating and testing code from all the developers, or even all the development teams, involved in building the application, even if their code hasn’t changed at all!

For typical IT teams, integration and testing can take up to two weeks. Worse, development teams can only release new features during a designated release window, because the overhead of the integration process limits the frequency of releases. As a result, monolithic application architectures reduce the frequency of updates and limit the company’s ability to respond to market demands.

In contrast, the microservices approach pioneered by streaming video provider Netflix reduces the integration and testing burden by changing the deployment model for executables. Instead of one large executable file, microservices break a single application into several separately executable functional components.

This deconstruction allows each functional component to be updated and deployed without affecting other parts of the application. In turn, this reduces many of the problems with monolithic architecture:

Each microservice can be updated independently on a schedule that matches the functionality it contains, since all microservice executables run separately. So, for example, if one microservice provides functionality related to a rapidly changing business offering (for example, an e-commerce user promotion), that microservice can be changed frequently without affecting the parts of the application that change less often (for example, user identity).

Reduced integration overhead for faster deployment of functionality. Since each microservice operates independently, there is no need to integrate its updated code with code from other microservices. This reduces (or eliminates) integration efforts, which greatly speeds up the deployment of new features.

Unstable workloads are much easier to handle. In monolithic architectures, traffic spikes require you to install the entire executable and attach it to the application pool. For very large executables, this can take a significant amount of time, making it difficult to respond to volatile traffic loads. Moreover, if only part of the application is heavily used (for example, the video-serving function), it is still necessary to deploy copies of the entire monolithic executable, resulting in wasted resources. With microservices, you only need to scale the components associated with a specific function (for example, serving video), which reduces both scaling time and wasted resources.

Testing is simplified because only the functionality associated with a specific microservice needs to be tested, rather than running the entire application through a full test pass for every code change.
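To make this deconstruction concrete, here is a minimal sketch of a single-function “user identity” microservice using only Python’s standard library (the endpoint, data, and port handling are illustrative assumptions, not a specific framework):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "user identity" microservice: one small, independently
# deployable executable that owns exactly one piece of functionality.
USERS = {"42": {"id": "42", "name": "Ada"}}

class UserService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /users/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1] in USERS:
            body = json.dumps(USERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Start the service on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), UserService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A real deployment would run multiple instances of such a service behind a gateway; the point is simply that the executable owns one narrow piece of functionality and can be updated and scaled on its own.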

Of course, the move to a microservice architecture raises its own problems. You will face two main ones: how to partition functionality into separate services, and how to connect the individual microservices so that the aggregate serves as a complete application. The second problem is solved by connecting services through APIs, which I will return to in the next section.
 

Partitioning

Correct partitioning is extremely important. Fine-grained microservices enable rapid iteration of functionality and reduce integration effort, but add complexity to application monitoring and management. Coarser-grained microservices make applications easier to monitor and manage, but may require more integration work because more functionality is aggregated within each service.

One way to partition functionality is to find natural groupings and make them separate microservices. For example, functions related to user identification and authentication are usually self-contained and are often used by multiple applications. In this case, it would be wise to split them out into a separate microservice.

Another strategy is to follow Conway’s Law, which observes that system designs tend to mirror the organizational structures that produce them. In the context of microservices, this means the natural way to partition services is to look at how your technical teams are organized. If you have a “payment transactions” team, that can be a good candidate for an individual microservice, and so on throughout the organizational structure. Of course, you need to look closely to make sure these structures make sense when mirrored as a microservices architecture; a microservice that spans many organizational boundaries will itself be complex.

Overall, the move to microservices is an important and growing trend. The microservices architecture provides good support for cloud applications with their unique needs.
 

2. API Access: Internal and External Access Using Standardized Methods

One of the obvious challenges in a microservices application architecture is getting the different services to interact – how to accept requests for a service and return data. In addition, client-facing microservices need to respond to user requests from a browser, a mobile phone, or other types of devices.

RESTful APIs are the standard way to handle interactions in microservice applications. These APIs offer an interface that can be called over a standardized protocol. This makes it easy for outside callers, whether from another service on the same local network or from the Internet, to know how to format a service request.

Each service treats its API as a “contract”: if a call is properly formatted, carries the proper identification and authentication, and contains the correct payload, the service will execute and respond with the appropriate data. Conceptually, APIs are quite simple, but in practice several elements are vital to making APIs a viable connectivity mechanism. These include:
 
API versioning: One of the great benefits of a microservices architecture is the ability to update functionality frequently. Sometimes new functionality requires a new API format due to additional required arguments, different returned payloads, and so on. Simply updating an existing API in place is a bad strategy, because until every caller is updated to support the new format, things will break. Therefore, you must preserve the existing API format and behavior for each microservice while simultaneously providing a new version of the API that supports the new format and behavior.
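The versioning rule above can be sketched as a dispatcher that keeps the v1 contract intact while offering v2 alongside it (the handler names and payload shapes are hypothetical):

```python
# Hypothetical versioned handlers: v1 returns the original payload shape,
# v2 adds a field without breaking existing v1 callers.
def get_user_v1(user):
    return {"name": user["name"]}

def get_user_v2(user):
    return {"name": user["name"], "email": user["email"]}

HANDLERS = {"v1": get_user_v1, "v2": get_user_v2}

def handle(path, user):
    # e.g. path = "/v1/users/42" -> the version prefix selects the contract
    version = path.strip("/").split("/")[0]
    return HANDLERS[version](user)
```

Old callers keep hitting `/v1/...` and see no change; new callers opt in to `/v2/...` when they are ready.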
 
Throttling: Too many API calls can overwhelm a service’s ability to respond, degrading the performance of the entire application. Moreover, externally facing APIs can be targeted by distributed denial-of-service attacks that attempt to overwhelm the application with traffic. For these reasons, it is important to track API call volume and shed load during periods of very high traffic by rejecting calls.
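A common way to implement this kind of throttling is a token bucket; a minimal sketch (the rate, capacity, and class name are illustrative assumptions):

```python
import time

class TokenBucket:
    """Reject calls once a caller exceeds `rate` requests per second,
    allowing short bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # throttled: the API call is rejected
```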

 
Circuit breakers: Just as a service can find its API under too much load, a service may not be able to respond quickly enough to legitimate requests. Since the slow response of one microservice can degrade the performance of the whole application, “circuit breakers” are an important part of every service. A circuit breaker puts a timer on the microservice’s response: if the service takes too long to respond, the breaker stops the call and returns fallback data that allows the rest of the application to keep running. To make this work, of course, the application developer must define the fallback data and prepare the service to return it when a request cannot be fully fulfilled.
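A minimal circuit breaker along these lines might look like the following sketch (the failure threshold, time budget, and class name are assumptions; a production breaker would also time out mid-call rather than measure after the fact, and would periodically probe to close the circuit again):

```python
import time

class CircuitBreaker:
    """Call a service with a response-time budget; after repeated slow or
    failed calls, short-circuit and return fallback data instead."""
    def __init__(self, func, fallback, budget=0.5, max_failures=3):
        self.func = func
        self.fallback = fallback      # canned data that keeps the app running
        self.budget = budget          # seconds allowed per call
        self.max_failures = max_failures
        self.failures = 0

    def call(self, *args):
        if self.failures >= self.max_failures:
            return self.fallback      # circuit open: skip the failing service
        start = time.monotonic()
        try:
            result = self.func(*args)
        except Exception:
            self.failures += 1
            return self.fallback
        if time.monotonic() - start > self.budget:
            self.failures += 1        # too slow counts as a failure
            return self.fallback
        self.failures = 0             # healthy response resets the count
        return result
```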
 
Data caching: This mechanism supports circuit breakers, but it is also useful in normal operation. If portions of a service’s data do not change frequently (for example, a user’s home address), it can make sense to cache that data and return it directly instead of requiring a database lookup on every request.
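A simple time-to-live cache captures the idea (the TTL value and loader interface are illustrative assumptions):

```python
import time

class TTLCache:
    """Cache slow-changing service data (e.g. a home address) so repeated
    reads skip the expensive database lookup until the entry expires."""
    def __init__(self, loader, ttl=300):
        self.loader = loader  # the expensive lookup, e.g. a DB query
        self.ttl = ttl        # seconds an entry stays fresh
        self.store = {}

    def get(self, key):
        entry = self.store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]   # cache hit: no database round trip
        value = self.loader(key)
        self.store[key] = (value, time.monotonic())
        return value
```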

These four elements demonstrate that the transition to a microservices architecture imposes additional requirements beyond simply exposing an interface mechanism. Despite these complexities, APIs aren’t going anywhere: they are more useful, flexible, and developer-friendly than any other connectivity mechanism.

3. Operational Integration: Aggregating log and monitoring data for application management

One of the biggest challenges for operations in traditional environments is the overhead of porting new code releases to production. Because monolithic architectures bundle all of the application code into a single executable file, new code releases require the entire application to be deployed. This often causes problems because:

A production environment is significantly different from a development environment. A classic refrain when encountering bugs in a production environment is the developer saying, “It worked in my environment!”

It is difficult to test new functionality in a production environment without migrating the entire environment to a new version of the application. It takes a lot of effort to release new code.

In most production environments, there is no way to revert code changes with new features, so if there are problems with the code, you need to recreate the previous production environment in an emergency.

Microservices can greatly simplify this situation. Because the application is partitioned, code changes are confined to specific executables, allowing updates that leave most of the application untouched. This makes the process of changing code simpler, easier, and less risky.

In addition, because most microservice environments have redundancy for each microservice, you can gradually roll out new functionality by taking out part of the microservice pool and creating a replacement instance that represents the new code. In general, microservices remain healthy during this transition because at least one instance is always running, and updates can be performed during normal business hours when the most experienced employees are on duty.

Also, since microservices communicate using APIs, you can expose new functionality by providing a new version of the API. If the new feature proves to be prone to crashes, you can configure the API Management System to disable access to the new API, ensuring that the old version of the application is still running while you fix the new feature. This allows you to revert your code changes and fixes one of the more frustrating issues with monolithic code updates.
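The API-level rollback described above can be sketched as a tiny gateway that disables a misbehaving version while older versions keep serving (the `Gateway` class and handlers are hypothetical, not a specific API management product):

```python
# Hypothetical API gateway toggle: route traffic away from a crashing v2
# while v1 keeps serving, without redeploying any code.
class Gateway:
    def __init__(self, handlers):
        self.handlers = dict(handlers)  # version -> handler function
        self.disabled = set()

    def disable(self, version):
        self.disabled.add(version)      # the "revert" switch

    def route(self, version, request):
        if version in self.disabled or version not in self.handlers:
            raise LookupError(f"API {version} unavailable")
        return self.handlers[version](request)
```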

However, microservices are not a panacea. This new application architecture poses challenges for operations teams that require IT organizations to prepare new monitoring and management systems that can handle microservice-based applications.

One immediate problem is that there are many more executables that make up a given application. For this reason, monitoring and control systems must be prepared to include many more data sources and make them understandable (and useful) to operational personnel.

Here are some monitoring and management elements to consider in a microservice architecture:

Dynamic application topologies: In production environments, instances of microservices come and go in response to code updates, application loads, and underlying resource failures (i.e., the server running the microservice crashes or becomes unreachable on the network). Because instances are ephemeral, application monitoring and management systems must be able to attach microservice instances to, and detach them from, an application topology dynamically.

Centralized logging and monitoring: The ephemeral nature of instances means that their logs and monitoring records are equally ephemeral. To ensure that application information remains available, you need to redirect log and monitoring records to a centralized location where they can be permanently stored. This usually takes the form of shipping logging and monitoring events through a real-time event-ingestion service into an unstructured data store. This makes records easier to search and analyze, and also enables time-series comparisons (for example, finding all days on which the app experienced 200 percent spikes in traffic).
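A common pattern is for every instance to emit structured, machine-parseable log lines that a collector ships to central storage. A minimal sketch (the field names and `log_event` helper are illustrative assumptions, not a specific logging product):

```python
import json
import sys
import time

def log_event(service, level, message, stream=sys.stdout, **fields):
    """Emit one structured JSON log line. In production, every microservice
    instance would write lines like this for a log shipper to collect."""
    record = {
        "ts": time.time(),   # timestamp enables time-series comparisons
        "service": service,  # which microservice instance emitted it
        "level": level,
        "message": message,
        **fields,            # arbitrary context, e.g. user or request IDs
    }
    stream.write(json.dumps(record) + "\n")
    return record
```

Because every line is self-describing JSON, a central store can index it and correlate events across services.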

Root cause analysis: There is no denying that microservice architectures are more complex than traditional monolithic architectures. This means that a problem that surfaces in one service, for example as an error seen by the user, may actually originate further down in the application, say at a service’s caching layer. This makes it harder to determine the root cause of a problem, which in turn increases the need for centralized logging and monitoring. When you centralize and merge the logs of all the services in an application, you can see that a user-visible error coincides with a caching error in another service, and focus your debugging there.

Given the additional operational complexity that microservices bring, IT organizations should evaluate their current operations tooling to identify areas that need to be upgraded or replaced. The benefits of a microservices architecture align so clearly with business requirements that this application approach is likely to become the de facto architecture over the next five years.
 

4. DevOps: Automating the Application Lifecycle

Today’s IT department is made up of distinct teams, each responsible for one part of the application lifecycle: development, application build, QA, deployment, and operations. Unfortunately, in most IT organizations each group optimizes only its own internal processes. This leads to manual handoffs from one group to another, with each group often building a completely new application executable in a new runtime environment. There are frequently large gaps between the time one group hands off its work and the next group picks it up. These disjointed IT structures create extremely long deployment times, which is a disaster in the new IT world, where rapid deployment and frequent updates are the norm.

DevOps is an attempt to break down the barriers between these IT groups, and a key element is replacing manual processes with automation. In DevOps, your goal is to minimize the time between developers writing code and putting it into production.

Implementing DevOps processes is not trivial. Most organizations start in one of two ways:

Eliminating a well-known bottleneck in the application lifecycle. For example, QA teams often struggle to obtain enough resources and therefore postpone testing while they procure servers, install and configure software, and then test the new code. This can mean long delays before they can provide quality feedback to developers. Some organizations move testing to the cloud, where QA can obtain resources faster. Many others go further and make developers responsible for writing tests that validate any new code they write. This means quality assessment happens as part of the development process, rather than later with a separate testing team.

Assessing the overall application lifecycle using a technique called value chain mapping (VCM). VCM analyzes the entire lifecycle, identifying which teams are involved, what each team does, and how long each team’s task takes. Each team then develops a plan to optimize its own process, while all groups collaborate to identify ways to eliminate manual handoffs.

Most IT organizations embarking on a DevOps initiative end up finding that they need to study the entire lifecycle of an application using the VCM process. The reason: optimizing or automating the work of one group in isolation does not noticeably accelerate the overall delivery time of the application. Only by running VCM, optimizing each individual silo, and then automating across teams can an IT organization respond to the new “being business” requirement.

The impact of DevOps can be quite dramatic. Many IT organizations find they can go from struggling to roll out a new release every six weeks to doing so weekly or even more frequently. DevOps exemplars like Amazon can deploy code changes hundreds of times per hour. While this is impressive, most corporate IT organizations find weekly or even daily releases sufficient.
 

5. Testing: Changing the Role and Use of Quality Assurance (QA)

In most IT organizations, testing has historically been done by a quality assurance team with too few staff and insufficient funding. This effectively limits testing to manual functional testing that confirms the application’s features work correctly. What’s more, most IT organizations postpone QA until just before deployment, resulting in last-minute rework of the application or, worse, the deployment of substandard code.

This approach may seem acceptable for “business support” applications, but it is completely unacceptable for “being business” applications. Application quality cannot be an afterthought: QA testing should be a core part of the development process and should happen early, so that problems can be identified and fixed before they cause panic or downtime.

The main way to accomplish this is to move testing to an earlier stage in the development lifecycle. Responsibility for developing functional tests shifts to the developers, who create tests that exercise all the new functionality they write.

However, this requires more than simply handing off manual work from one group to another. You need to create an automated test environment that is invoked as soon as new code is checked in. When a developer checks in code (including its new tests), the code repository should automatically run the set of functional tests covering the code the developer was working on.
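The check-in flow above can be sketched as a new feature plus its developer-written test, driven by a stand-in for the repository hook (the `apply_promotion` function and the hook itself are hypothetical):

```python
# Hypothetical new feature and the test checked in alongside it.
def apply_promotion(price, percent_off):
    """New code under test: discount a price by a percentage."""
    if not 0 <= percent_off <= 100:
        raise ValueError("percent_off must be between 0 and 100")
    return round(price * (1 - percent_off / 100), 2)

def test_apply_promotion():
    assert apply_promotion(100.0, 20) == 80.0
    assert apply_promotion(19.99, 0) == 19.99

def run_checkin_tests():
    """Stand-in for the repository hook: run every test_* function and
    report failures so the check-in can be rejected if any test fails."""
    failures = []
    for name, fn in list(globals().items()):
        if name.startswith("test_") and callable(fn):
            try:
                fn()
            except AssertionError:
                failures.append(name)
    return failures
```

In practice the hook would be a CI job (e.g. triggered by the repository server) running a real test framework; the point is that no human has to remember to run the tests.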

This move to developer-driven functional testing allows QA teams to focus on three other aspects of testing that have traditionally been neglected but are increasingly important for business applications:

Integration testing:

This testing deals with end-to-end testing of the application, exercising all of its parts together. Integration testing ensures that new code does not inadvertently break existing features when new functionality is implemented. You can automate this testing to run upon code check-in, and it can be extremely useful as a way to avoid unexpected bugs in production. Implementing an integration testing environment requires automated test capability, dedicated testing resources, and investment in test development.
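An end-to-end integration test of this kind might chain two services and assert on the combined result; a toy sketch using in-process stand-ins for the real services (all names here are hypothetical):

```python
# Stand-ins for two deployed microservices crossed by a "checkout" flow.
def identity_service(user_id):
    return {"id": user_id, "verified": True}

def payment_service(user, amount):
    if not user["verified"]:
        raise PermissionError("unverified user")
    return {"charged": amount, "user": user["id"]}

def checkout(user_id, amount):
    user = identity_service(user_id)      # service 1: identity
    return payment_service(user, amount)  # service 2: payments

def test_checkout_end_to_end():
    # The integration test exercises the whole path, not one service.
    receipt = checkout("42", 19.99)
    assert receipt == {"charged": 19.99, "user": "42"}
```

Against a real deployment, the stand-ins would be replaced by HTTP calls to the services’ APIs, but the structure of the test is the same.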

Client testing:

With the growing shift to mobile app access, it is critical to test apps on all of the most common mobile devices. Many IT organizations start with an informal “mobile lab” stocked with a selection of mobile phones for testing purposes, but quickly find that this is not enough because they usually cannot keep up with all the new devices coming out. Therefore, most organizations turn to a mobile testing service that provides a comprehensive external set of devices and contains enough resources to support large volumes of testing.

Performance/stress testing:

Cloud applications, by their very nature, tend to experience very volatile load volumes. Many applications fail or perform poorly at high volumes, either because some features cannot handle high traffic or because the application was never designed to provide elasticity – the ability to grow and shrink in response to changing volumes. In the business world, application failure or poor performance can hurt a company’s bottom line, which is why performance/load testing is critical for cloud applications.
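A minimal load-test harness illustrates the idea: fire concurrent requests at a handler and count how many exceed a latency budget (the parameters and function names are illustrative; real tools such as JMeter or Locust do far more):

```python
import concurrent.futures
import time

def load_test(handler, requests=200, workers=20, budget=0.1):
    """Fire `requests` calls at `handler` from `workers` threads and report
    how many exceeded the latency budget (in seconds)."""
    def timed_call(i):
        start = time.monotonic()
        handler(i)
        return time.monotonic() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed_call, range(requests)))
    return {
        "requests": requests,
        "slow": sum(1 for l in latencies if l > budget),
        "max_latency": max(latencies),
    }
```

Running such a harness at increasing volumes reveals whether the application degrades gracefully or falls over, which is exactly the elasticity question cloud applications must answer.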
