Cloud Native Principles: Reading Through the Clouds

June 3, 2020

iauro Team

Contributing immensely to the global software solutions ecosystem. DevOps | Microservices | Microfrontend | DesignThinking
In the last article we discussed the evolutionary process that led infrastructure to Cloud Native Infrastructure. With changing client needs and market relevance, organizations learned not only to accommodate this evolution but also to benefit from it. The technology-agnostic Cloud Native Infrastructure is currently helping organizations ensure maximum productivity even through remote work. However, some may still be skeptical about this infrastructure, and rightfully so: one should be at liberty to choose one's own infrastructure. That is why this next article in the series talks about the principles that form the core of this infrastructure. It will throw light on its benefits and put forth strong arguments in its favor. Let us begin.

The Core Principles

Cloud Native Infrastructure was introduced to serve as both infrastructure and platform as a service, so that any organization can run its business without worrying about the overheads of server operation and maintenance. The principles of this architecture are therefore woven around the same idea.
  • Agnosticism: Of course, the first principle had to be the ability to work in a cloud-agnostic environment. The cloud should support concurrently running nodes that can be logged for their session and configuration and can operate from different systems. This is necessary to ensure that clouds do not fall back into the old Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) patterns. In other words, a cloud should work as an abstract infrastructure layer that simply supplies all the necessary resources, so that the organization only has to worry about its internal business processes.
  • Decomposed Software: This is one of the most important principles for a true cloud native application. Software needs to be built as loosely coupled services (microservices). These services are built around business capabilities and are independently deployable by a fully automated deployment pipeline. Independent lifecycle management per microservice is the key to adopting this principle.
  • Resiliency: Murphy's law tells us that "anything that can go wrong will go wrong". Applied to software, this means that in a distributed system failures will happen: hardware can fail, and the network can suffer transient faults. The goal of resiliency is to return the application to a fully functioning state after a failure. Two of the main ways to offer resiliency are High Availability (HA) and Disaster Recovery (DR), which can be achieved through multi-node clusters and multi-region deployments.
  • Flexibility: The next logical expectation of such infrastructure is flexibility; undoubtedly, one would expect it to be scalable. Clouds need to be flexible enough to scale up or down based on the load they expect to face. With IaaS as one of the underlying layers, organizations can curate policies for scaling, and APIs can be used to control machine images as required.
  • Externalized Configuration: Decoupling configuration from the application and treating it as a versioned artifact helps eliminate operational errors. Clouds are primarily meant to make way for automation, which means they should be able to serve with minimal (or even negligible) human intervention. The management and configuration of services should be an overhead for the cloud itself, not for the organization being served. That, in essence, is what justifies Infrastructure, Platform and even Software as a Service.
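The agnosticism principle above can be sketched in code: business logic depends only on an abstract interface, and any cloud's backend can be plugged in behind it. This is a minimal illustration with hypothetical names (ObjectStore, archive_invoice), not a real provider SDK:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Abstract storage layer: business code depends only on this interface."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for the sketch; a real deployment would plug in an
    S3-, GCS- or Azure-backed implementation behind the same interface."""

    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_invoice(store: ObjectStore, invoice_id: str, body: bytes) -> None:
    # Business logic never mentions a concrete cloud provider.
    store.put(f"invoices/{invoice_id}", body)
```

Swapping clouds then means swapping the concrete `ObjectStore`, with no change to the business code.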
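One concrete resiliency tactic for the transient failures mentioned above is to retry with exponential backoff, so that short-lived network or node faults heal on a later attempt. A minimal sketch (the helper name and parameters are illustrative):

```python
import time

def with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry a flaky operation with exponential backoff.

    Transient faults (network blips, brief node failures) often succeed on a
    later attempt; persistent faults are re-raised after max_attempts.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

Production systems usually add jitter to the delay and a circuit breaker on top, but the shape of the pattern is the same.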
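The flexibility principle's "scale up or down based on load" is, at its core, a small arithmetic rule. This sketch mirrors the proportional formula used by autoscalers such as the Kubernetes Horizontal Pod Autoscaler (the function and bounds here are illustrative, not any vendor's API):

```python
import math

def desired_replicas(current_replicas, current_load, target_load,
                     min_replicas=1, max_replicas=10):
    """Compute how many instances a scaling policy should run.

    Scales the replica count by the ratio of observed load to target load,
    then clamps the result to the allowed range.
    """
    raw = math.ceil(current_replicas * current_load / target_load)
    return max(min_replicas, min(max_replicas, raw))
```

For example, 4 replicas at 150% of target load scale out to 6, while the same 4 replicas at 50% of target scale in to 2.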
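Externalized configuration is often realized by reading settings from the environment, in the spirit of the twelve-factor app: the same build artifact runs unchanged everywhere, and only the environment differs. A minimal sketch (the variable names APP_DB_URL and APP_POOL_SIZE are made up for illustration):

```python
import os

def load_config(env=os.environ):
    """Read configuration from the environment instead of baking it into the
    build artifact, so operators can change it without a redeploy."""
    return {
        "db_url": env.get("APP_DB_URL", "postgres://localhost/dev"),
        "pool_size": int(env.get("APP_POOL_SIZE", "5")),
    }
```

Keeping these values out of the code also lets them be versioned and audited separately, which is exactly the "configuration as a versioned artifact" idea above.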
The principles that Cloud Native stands on are framed around the necessities of new-age applications and businesses. Automation, Artificial Intelligence, the Internet of Things and more are all to be facilitated by this infrastructure (or newer versions of it). It only makes sense for organizations to incorporate and enjoy it if they haven't already. In the next leg of this article series, we will talk about the benefits of this infrastructure and also try to debunk some common misconceptions about it. Stay tuned.

