Can cloud-native computing eliminate tech debt?

September 21, 2021

iauro Team

Contributing immensely to the global software solutions ecosystem. DevOps | Microservices | Microfrontend | DesignThinking
Cloud computing is a new paradigm for enterprise IT that touches all facets of modern technology, from application development to software architecture and the underlying infrastructure that keeps everything moving.

Cloud-native, then, gives us a chance to put our house in order. We can take our newfangled Kubernetes-enhanced broom and sweep out all the dusty corners of our existing technology. It would be logical to assume that the cloud will finally put an end to all the technical debt that has accumulated over the years.

Perhaps logical, but not realistic. Tech debt is notoriously persistent. What's more, given a dollar to spend, any CIO would rather put it toward something new than toward fixing a predecessor's problems.

However, there is reason for hope. Cloud-native is certainly not a magic broom, but its core techniques can genuinely help you cut tech debt and deliver new software capabilities without accumulating more debt, or at least not as much as you otherwise would.

Ditching inherited technical debt

Getting your organization through technical debt rehabilitation so you never accumulate new debt is certainly part of the puzzle, but the more pressing issue is paying off existing, legacy technical debt.

Over the years, handy shortcuts, awkward coding, and all manner of chewing gum and baling wire in the infrastructure have created an impressive Gordian knot of complexity. As with the legendary knot, we may need to swing a sword rather than try to untie it.

Cloud-native computing offers just such a sword: architectural refactoring based on domain-driven design. Domain-driven design calls for decomposing complex enterprise software into separate business domains, each with a bounded context.

Such bounded contexts allow business concepts such as "customer" or "invoice" to be modeled according to the needs of each part of the business. For example, different departments of a large enterprise may have different (possibly overlapping) views of the customer. Each of these would correspond to a separate bounded context.
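To make the idea concrete, here is a minimal sketch in Python. The class and field names are purely illustrative, not taken from any real system: two departments model the same customer differently, linked only by a shared identity.

```python
# Sketch: two bounded contexts, each with its own view of "Customer".
# All names here are illustrative, not from any real system.
from dataclasses import dataclass

# Billing context: cares about invoicing details.
@dataclass
class BillingCustomer:
    customer_id: str
    billing_address: str
    payment_terms_days: int

# Support context: cares about service history.
@dataclass
class SupportCustomer:
    customer_id: str          # a shared identity links the two contexts
    support_tier: str
    open_tickets: int

billing = BillingCustomer("C-42", "1 Main St", 30)
support = SupportCustomer("C-42", "gold", 2)
```

Neither context needs (or even sees) the other's fields, which is exactly the decoupling that makes the domains independently evolvable.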

Early forays into microservice architectures led to a proliferation of interconnected microservices whose complexity limited their scalability. It soon became clear that organizing microservices along bounded contexts was key to managing this complexity, as well as to delivering microservices-based solutions at scale.

Thus, domain-driven design has become part of cloud-native computing and, moreover, informs how we should approach modernizing legacy assets under this new paradigm.

Again, the metaphor of the Gordian knot applies. The first step in tackling legacy software that carries high technical debt is architectural refactoring: translating legacy (usually monolithic) architectures into modular elements aligned with bounded contexts.

In other words, you introduce modularization along business-driven bounded contexts to pave the way toward reduced technical debt.

As always, the details make the difference. You can apply this architectural refactoring in several different ways, depending on your business needs and constraints:

  • You may find that the existing code in a bounded context no longer meets today's requirements at all. In such cases, you can rewrite that code as microservices.
  • Legacy business logic may still add value, in which case you migrate it to microservices.
  • You can expose legacy modules as APIs, either because they already have useful APIs or because you've updated the legacy code to expose them through an API.
  • You can create scripts that interact with legacy modules using robotic process automation (RPA) bots.
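As one concrete illustration of the API approach, here is a hedged Python sketch. The `legacy_invoice_total` function and its pipe-delimited record format are entirely hypothetical stand-ins for inherited code; the point is the thin facade that hides them behind a clean interface.

```python
# Sketch: exposing a (hypothetical) legacy module through a clean API facade.

def legacy_invoice_total(raw_lines):
    """Stand-in for inherited code: positional pipe-delimited records,
    with prices implicitly in cents."""
    total = 0
    for rec in raw_lines:
        _sku, qty, price_cents = rec.split("|")
        total += int(qty) * int(price_cents)
    return total

class InvoiceAPI:
    """Bounded-context facade that hides the legacy record format."""

    def total(self, line_items):
        # Translate the modern representation into the legacy wire format...
        raw = [f"{i['sku']}|{i['qty']}|{i['price_cents']}" for i in line_items]
        cents = legacy_invoice_total(raw)
        # ...and return a clean, unit-explicit result.
        return cents / 100

api = InvoiceAPI()
total = api.total([
    {"sku": "A1", "qty": 2, "price_cents": 499},
    {"sku": "B2", "qty": 1, "price_cents": 1250},
])
```

Callers now depend only on the facade's contract, so the legacy internals can later be rewritten as microservices without touching any consumer.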

Note that because you previously refactored and modularized the legacy software into bounded contexts, each of the four approaches described above is now simpler than it would otherwise be.

In addition, future refactoring, if necessary, will also be easier, because you have chosen a divide-and-conquer approach that splits up your technical debt. It is more practical to pay off, say, four small debts over time than one large one.

This approach is also one of the best ways to reduce the fragility of RPA bots. Without bounded-context architectural refactoring, such bots can break whenever something changes anywhere in the application landscape. With the appropriate separation, such failures become more contained and easier to fix.

Preventing New Tech Debt
Avoiding new tech debt is like asking your teenager to keep their room clean. It may stay clean for a while, but then one thing leads to another, and the mess returns.

More structure is what they need, am I right? At least in the cloud-native case, structure really does help.

Cloud-native computing includes a broad set of best practices that guide how best to design your software, set up your infrastructure, build and deploy your applications, and manage everything in a production environment.

Of course, if you follow all of these practices, you're less likely to build up new tech debt, or at least it will pile up more slowly than the dirty socks in your teenager's bedroom.

One of the most important examples: Infrastructure as Code (IaC) and its mantra "cattle, not pets". The IaC principle states that you never log in to running servers to tweak them by hand. Instead, you change the code that defines the infrastructure and redeploy it.
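A toy sketch of the "cattle, not pets" idea, with a simulated fleet; a real IaC tool would talk to a cloud provider's API rather than a Python list, and these function names are invented for illustration:

```python
# Sketch of the IaC principle: infrastructure is described by code, and
# changes are made by editing that code and redeploying, never by hand-
# editing a live server. The "fleet" here is simulated.

def render_server_spec(role, count, image):
    """Generate the desired fleet from a handful of parameters."""
    return [{"name": f"{role}-{i}", "image": image} for i in range(count)]

def redeploy(fleet, spec):
    fleet.clear()        # cattle: replace the herd, don't nurse a pet
    fleet.extend(spec)

fleet = []
redeploy(fleet, render_server_spec("web", 3, "app:v1"))
# An upgrade means a new spec and a redeploy, not SSH and tinkering:
redeploy(fleet, render_server_spec("web", 3, "app:v2"))
```

Because every server comes from the same spec, there are no hand-tuned snowflakes left to drift out of sync.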

The IaC principle can certainly limit additional technical debt in production, as it provides a proactive approach to troubleshooting production problems. Just one problem: IaC doesn't go far enough.

The problem with IaC is that you have to write various programs (or scripts, or recipes, or whatever the term du jour is). You then have to test, manage, and version these programs like any other software, which means technical debt can sneak in just as it can with any other software.

Fortunately, cloud-native computing has gone a few steps beyond IaC, each step better than the last.

The first step is to describe the infrastructure using declarative configurations that specify how the infrastructure should be configured without spelling out how to achieve that configuration.

Declarative approaches reduce technical debt relative to imperative IaC because there is less room for shortcuts or sloppiness, but these representations, which usually live in YAML files or other JSON-like formats, are still quite code-like, especially for complex, dynamic infrastructure configurations.
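A minimal illustration of the declarative idea: the desired state is plain data (think of it as the Python equivalent of a YAML file), and a small hypothetical engine works out what actions would get there.

```python
# Sketch: a declarative configuration says *what* should exist; an engine
# figures out *how* to get there. The engine here is a toy.

desired = {"web": 3, "worker": 2}   # the declaration (think YAML)
current = {"web": 1, "worker": 4}   # what is actually running

def plan(current, desired):
    """Diff current state against desired state into a list of actions."""
    actions = []
    for role in sorted(set(current) | set(desired)):
        delta = desired.get(role, 0) - current.get(role, 0)
        if delta > 0:
            actions.append(("create", role, delta))
        elif delta < 0:
            actions.append(("destroy", role, -delta))
    return actions
```

The operator edits only the `desired` data; the imperative steps are derived, which is where the reduced room for shortcuts and sloppiness comes from.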
The second step: GitOps. GitOps brings Git-centric practices and processes to software release and management.
GitOps complements declarative infrastructure configuration in many ways, because it applies a declarative approach to software deployment as well. But GitOps goes further by adding more structure to the process, and the more structure, the less opportunity for technical debt to emerge.

The third step is the current state of the art: intent-based computing.
Intent-based computing has three parts:

  • A declarative representation of the software in question (infrastructure, application code, or whatever) in the form of technical policies.
  • An abstraction of those technical policies into business policies that represent the business objectives behind the underlying configurations.
  • A mechanism to ensure that the underlying software conforms to this business intent, not only through policy enforcement but also on an ongoing basis, eliminating policy drift.

In other words, intent-based computing takes declarative configuration and GitOps and adds further structure, essentially ensuring that the entire cloud-native environment remains aligned with business intent over time.
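The three parts above can be sketched as a toy reconciliation loop; every name here is illustrative, not a real product API. The business policy is stated once, and a reconcile step continuously corrects any drift in the underlying state.

```python
# Sketch of the reconciliation idea behind intent-based computing.
# All names are hypothetical.

intent = {"min_replicas": 3, "encrypted": True}   # the business policy

def reconcile(state, intent):
    """Return a corrected copy of `state` that satisfies the intent."""
    fixed = dict(state)
    if fixed.get("replicas", 0) < intent["min_replicas"]:
        fixed["replicas"] = intent["min_replicas"]   # enforce availability
    if intent["encrypted"]:
        fixed["encryption"] = "enabled"              # enforce security
    return fixed

# Drift has crept into the running system; one reconcile pass removes it.
drifted = {"replicas": 1, "encryption": "disabled"}
state = reconcile(drifted, intent)
```

In a real system this loop would run continuously, so deviations are corrected shortly after they appear rather than accumulating as debt.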

