An introduction to Microservices
In a world driven by digital transformation, success is defined by new-age technologies and intuitive IT ecosystems. Such infrastructure cannot be built on the back of legacy systems and monolithic applications. So how do we go about building it? Enter microservices.
This article will cover the following points:
What exactly is a microservices architecture?
How useful is it?
When to implement a microservices architecture.
Deployment: Plain or Docker?
What is a microservices architecture, and how useful is it?
A slight variation of SOA (service-oriented architecture), this is an architectural style built on loosely coupled, independent services. Microservices are already making the rounds among developers, and for good reason. When a monolithic application is divided into smaller, autonomous micro-applications, it falls under microservices. This approach offers several business benefits, namely:
– Flexibility: No dependency on a single development architecture. Each microservice can be tuned for performance separately
– Reliability: Faster performance, more features and quicker problem-resolution times
– Scalability: Each service can be scaled independently and deployed according to the load it carries, so capacity can be balanced accordingly
– Speed: Development is much faster, since individual services are deployed instead of everything at once
– Maintenance: As each module serves a specific purpose, maintenance becomes much simpler, with respect to both code and architecture
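To make the idea concrete, here is a minimal sketch of one such autonomous service, using only Python's standard library. The service name, route and data are purely illustrative assumptions for the example.

```python
# Sketch of a tiny, self-contained "inventory" microservice (names are
# hypothetical). It owns exactly one responsibility and can be deployed,
# scaled and maintained independently of the rest of the system.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"sku-1": 12, "sku-2": 0}  # toy in-memory data store

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /stock/sku-1 -> {"sku": "sku-1", "quantity": 12}
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "stock" and parts[1] in STOCK:
            body = json.dumps({"sku": parts[1], "quantity": STOCK[parts[1]]})
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"})
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

def make_server(port=8001):
    """Build the HTTP server for this one service."""
    return HTTPServer(("localhost", port), InventoryHandler)

# To run standalone: make_server().serve_forever()
```

In a real system this service would live in its own repository and release pipeline, which is precisely what makes the flexibility and maintenance benefits above possible.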
Implementing a microservices architecture successfully
Whether you are migrating a pre-existing monolithic architecture to a microservices way of functioning or building new infrastructure from scratch, understanding the ‘when’ of this process is much more important than the actual ‘how’. Otherwise, there is a very good chance of landing in a mess if you try to do everything at once. The size, scope and capabilities need to be decided beforehand, and it is a good idea to begin with relatively broad service boundaries and then refine the structure over time. Without these parameters in place, implementation becomes too complex a process.
Ask yourself a few key questions before embarking on this journey: Is your organization mature enough to adopt a microservices app infrastructure? How strong is your data management team? Will the data be centralized or decentralized? What core will you focus on (PaaS or container management)? How many domain experts can help with this transition? An integration of independent services will not sustain itself without proper communication between those services, which is why setting up working communication structures is imperative to building intuitive APIs. There are various sub-structures to choose from when building these communication patterns, and as an organization you’ll have to figure out what works best for you: a point-to-point structure, an API-Gateway style structure or a message broker structure.
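As an illustration of the API-Gateway style, the sketch below shows the core idea: a single entry point that maps each public path to the internal service that owns that domain. All service names and addresses here are hypothetical.

```python
# Sketch of API-Gateway style routing (service names and URLs are
# assumptions for illustration). The gateway is the one public entry
# point; each backend service stays private behind it.
from typing import Optional

ROUTES = {
    "/orders": "http://orders-service:8001",
    "/users": "http://users-service:8002",
    "/billing": "http://billing-service:8003",
}

def resolve(path: str) -> Optional[str]:
    """Return the internal URL that should handle `path`, or None."""
    for prefix, backend in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend + path
    return None  # no service owns this path -> gateway answers 404
```

By contrast, in a point-to-point structure each service would hold such a mapping for the peers it calls, while a message broker structure removes direct addressing entirely: services publish and subscribe to topics instead.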
Deployment: Plain or Docker?
Only after the groundwork has been laid should you move on to the actual deployment of these services.
Unlike monolithic deployment, which is straightforward and focused on running everything at the same time, different microservices have different configurations and scaling requirements.
There are three practical approaches in common use, and based on the business benefits you want to achieve, you’ll have to pick from the following:
1. Deployment of microservices on separate virtual machines: Each service gets its own virtual machine, and this is the easiest approach to understand for organizations migrating to this architecture for the first time. Scaling happens by customizing the size of the machine, but the process is expensive, slow and not fault tolerant. It is thus not a very popular approach right now.
2. Deployment of microservices as Docker containers on Kubernetes: Containers are more efficient than virtual machines, since they remove the need for a full guest operating system. Docker has been very popular with developers for the last decade now, since it eradicates the need for duplicate environments. The next step is to orchestrate the various containers so that they can share resources and scale independently. This will depend on the size of your Kubernetes cluster, but it may again turn out to be an expensive option, as there is a possibility of being charged for unused resources.
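As a sketch of what containerizing one such service might look like, a minimal Dockerfile could be as follows; the file names and base image are assumptions for illustration.

```dockerfile
# Hypothetical Dockerfile for a single Python microservice.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Each container runs exactly one service process.
CMD ["python", "service.py"]
```

Kubernetes then schedules replicas of the resulting image (for example via a Deployment with a `replicas` count), which is what lets each service scale independently of the others.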
3. Deployment of microservices as serverless functions: This approach needs the least operational effort, and makes use of AWS Lambda, Google Cloud Functions or Azure Functions combined with API Gateway services. While it offers less control over the execution environment, it lets developers focus on code and eliminates the need to spend time on low-level infrastructure. It is also the most flexible option of the lot, as it scales services automatically, and because it is public-cloud based, you only pay for what you use.
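In the serverless model, a service shrinks to a single handler that the platform invokes per request. The sketch below uses the AWS Lambda / API Gateway proxy-integration shape; the route and payload are illustrative assumptions.

```python
import json

# Sketch of an AWS Lambda handler behind API Gateway (proxy integration).
# There is no server to manage: the platform invokes this function per
# request and scales it automatically. The payload here is illustrative.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the function only exists while a request is being served, the pay-per-use billing mentioned above follows directly from this execution model.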
Microservices have only just begun making their presence felt in the IT space, but in the coming years they are going to become more of a necessity than a choice. The need to evolve with the times is more real than ever, and the faster organizations realize this, the better their chances of scaling up will be!