Traditional development methodologies encourage the ‘monolithic’ approach to application development: building a single application that does everything required has been the modus operandi for a while. However, with the rise of modern systems based on the cloud or Platform-as-a-Service (PaaS) offerings like Cloud Foundry, and container platforms like Docker, this approach can prove limiting and unproductive. Microservices enable us to leverage the power of these new platforms and make the most of them.
The core concept behind Microservices is that certain bulky applications can be broken down into smaller pieces, making them much easier to build, maintain and update. In contrast, the traditional ‘monolithic’ approach dictates that all components of an application be developed as one big, singular piece.
In this blog, we’ll showcase the drawbacks and limitations of this old method, how to start working with Microservices, the new challenges and issues you’ll run into when moving to a Microservices approach, and how to overcome them.
The Monolithic Approach
A monolithic application is a multi-tiered piece of software that encapsulates all the working parts needed to achieve its purpose. The user interface, business logic and data access logic are usually part of the same code base / project and are generally developed on the same technology stack. It is self-contained, independent and designed in such a way that it is capable of performing every task needed for the overall function from start to end.
These days, monolithic applications are usually found in the following forms:
- Financial systems
- E-commerce platforms
- Travel platforms
- Social platforms
- SaaS based services
- Word processors
To showcase the salient features of a monolithic application, let’s take the example of a cab booking system (e.g., Uber). The below diagram is a visual representation of this system in monolithic form:
As seen in the diagram above, the various modules like Billing, Passenger & Driver Management, Payments, etc. are all part of the same program, which interfaces with external components. All these modules share a common database (in this case MySQL). Everything from the UI to sending notifications is handled by this ‘monolith’.
There are several benefits to using this approach:
- This approach is familiar to and ingrained in most developers from years of experience.
- Most of our development tools and IDEs (like Eclipse) are more suitable and focused for this single-tiered approach.
- Testing of this kind of application is simpler because all modules run in the same process and use a common language/framework.
- Deployment of monoliths is very straightforward since it’s usually just one process running on one server/machine.
- Overall efficiency of a monolith is high since all dependencies and resources are in the same package.
- It can be scaled to handle load by simply deploying multiple instances behind a load balancer.
Looking at these benefits, it can be easy to label this the preferred architectural choice. But it comes with many drawbacks too, namely:
- The complexity of this application grows over time as more modules are added. This results in a massive codebase which gets very difficult to maintain.
- A massive monolith can have a very long start-up time based on the resources and dependencies within it. Waiting several minutes for your application to start is a downside.
- Making minor changes to your application will require full recompilation and redeployment. This makes continuous deployment a major hassle.
- Individual modules cannot be scaled independently. For example, in this case, if the billing section of the monolith is unable to handle the load, it’s not possible to just deploy more billing modules. Multiple monoliths will have to be deployed for that one module to be scaled up.
- Reliability is also questionable. Since all modules run in the same process, one of them crashing could bring the whole monolith down, resulting in major downtime.
- Since all modules need to use a common framework or language, this can cause performance issues. For example, Python might be a better fit for the Trip Management module, but it can’t be used since all other modules are in Java.
Taking note of all these points, we can see that this approach may not be the best in all cases. To alleviate most, if not all, of these issues, the Microservices approach can be of great benefit.
The Microservices Approach
As mentioned earlier, the basic tenet of the Microservices approach is to split your application into a set of smaller, mostly independent but interlinked services.
A Microservice typically implements a set of distinct features or functionality, like Billing, Order Management, Payments, Notifications, etc. Each Microservice is a mini-application that executes a specific piece of business logic along with various adapters to connect to other Microservices or databases. Microservices usually expose their feature set using an API, UI or message queue that’s utilized by clients or other Microservices.
Each Microservice is also usually housed in a separate VM or Docker container to maintain the isolation that is required.
The below diagram illustrates the same cab booking system but this time, using Microservices.
As seen above, each module is now independent, contains everything it needs and is accessible via the REST API it exposes. It’s important to note that while having your Microservice expose an HTTP REST API is the preferred way, it’s not mandatory. If required, you can use other modes of communication like message queues.
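As a sketch of what one such module might look like on its own, here is a minimal, hypothetical Billing Microservice exposing a REST endpoint using only the Python standard library. The `FARES` store and the `/fares/<id>` URL scheme are illustrative assumptions, not part of the original system:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory store standing in for the service's private database.
FARES = {"101": {"trip_id": "101", "amount": 14.50, "currency": "USD"}}

def get_fare(trip_id):
    """Business logic: look up a fare, or None if unknown."""
    return FARES.get(trip_id)

class BillingHandler(BaseHTTPRequestHandler):
    """Exposes the billing logic as a tiny REST endpoint: GET /fares/<id>."""
    def do_GET(self):
        parts = self.path.strip("/").split("/")
        fare = get_fare(parts[1]) if len(parts) == 2 and parts[0] == "fares" else None
        if fare is None:
            self.send_response(404)
            self.end_headers()
        else:
            body = json.dumps(fare).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

# To run the service: HTTPServer(("", 8080), BillingHandler).serve_forever()
```

The key point is that the service owns its data and its endpoint; other modules can only reach the fares through HTTP, never through a shared database.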
There’s also a new component called the API Gateway which is responsible for routing requests from clients to the correct Microservice. The API Gateway will be explained in detail later.
Another big change is that there is no longer a common database for every module. In the Microservices approach, each component or module has its own private, service-specific database, as seen in the below diagram.
It is essential to have service-specific databases to benefit from the Microservices approach, because it enables loose coupling. If multiple services share a common database, inter-dependencies increase, and an outage of that common database could result in a big failure.
Another benefit of service-specific databases is that each service can use a different type of database that suits its needs. For example, the Passenger management service may use MySQL but the Notifications service can use Redis or CouchDB.
This approach will often result in duplication of data across these databases, but that’s just another overhead that has to be taken into account in this approach. Maintaining sync between data across Microservices will be explained later in the ‘Event-Driven Architecture’ section.
High availability of services
In order to make a service highly available, you simply deploy multiple instances of it behind a load balancer. In this situation, each instance can have its own service-specific database, or all instances of one service type can share a common database.
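The distribution a load balancer performs across those instances can be sketched as a simple round-robin rotation (the instance addresses below are hypothetical):

```python
import itertools

# Hypothetical addresses of three deployed instances of one service type;
# in production these would come from the platform or a service registry.
BILLING_INSTANCES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

_rotation = itertools.cycle(BILLING_INSTANCES)

def pick_instance():
    """Round-robin: each call returns the next instance in turn,
    so requests spread evenly and one crashed instance doesn't
    take the whole service down."""
    return next(_rotation)
```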
Client-to-Microservice communication
Now, let’s have a look at what’s needed to actually build a set of Microservices. The first thing you need to decide is how you want your potential clients/users to connect to your Microservices. This is, of course, under the assumption that each of your Microservices exposes a set of fine-grained endpoints.
There are 2 possible ways to handle this client-Microservice interaction. For this section, we’ll be using the example of an e-commerce website.
1) Direct Client-to-Microservice communication
In this scenario, a client makes requests to the required Microservice directly using whatever APIs or endpoints it has exposed. Each Microservice, in turn, would have a public URL accessible to the client. This URL maps to the Microservice’s load balancer, which distributes requests across the available instances.
Here, the client has to contact each Microservice separately. As such, the client must be programmed to know each Microservice’s individual communication mechanism. This can make the client program more bulky and tricky to maintain. Things like authentication, monitoring etc. also have to be handled by each Microservice individually increasing their complexity as well.
Updating a Microservice’s API will require you to update the client program too to maintain sync between them. Since each Microservice will have to be made public, this can also pose security risks.
2) API Gateway
Some of the issues in Direct Client-to-Microservice communication can be alleviated using an API Gateway. In simple terms, an API Gateway acts as the single point of entry into a Microservices-based system. Instead of a client sending requests to each Microservice, it talks to the API Gateway. The API Gateway, in turn, re-formats and forwards each request to the appropriate Microservice and returns its response to the client. Using this approach allows you to keep Microservices private and secure and only expose the API Gateway publicly.
The API gateway can also handle other tasks like authentication, monitoring, load balancing, caching, request shaping and management, and static response handling. Using an API gateway enables you to allow each Microservice to use the communication mechanism that is suited to it without making any changes to the client.
In the above diagram, the client sends a request to the API gateway asking for details of a product with a certain ID. It sends this as a standard HTTP request with no payload and expects a JSON response from the API Gateway. The API Gateway determines that this request requires data from 2 Microservices, each with a different mode of communication.
Service 1 is programmed to receive HTTP requests and provide XML responses whereas Service 2 communicates using a Message Queue protocol (AMQP) and returns JSON responses asynchronously. The API Gateway sends the appropriate requests to both services, receives both responses, combines and converts them to JSON and then forwards it to the client. This model is called the ‘Reactive Programming Model’.
As such, complex requests can be easily handled using an API Gateway. The downsides of this approach is that extra overhead is needed for deploying and programming the API Gateway to handle these kinds of requests. This, however, reduces the complexity of client and Microservice programs.
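The aggregation step described above can be sketched as follows. This is a simplified, hypothetical version of the gateway’s combine-and-convert logic: Service 1’s XML payload shape and Service 2’s JSON payload shape are illustrative assumptions:

```python
import json
import xml.etree.ElementTree as ET

def combine_responses(product_xml, recommendations_json):
    """Merge an XML reply from Service 1 and a JSON reply from Service 2
    into the single JSON document the client expects."""
    root = ET.fromstring(product_xml)
    product = {child.tag: child.text for child in root}  # flatten XML fields
    recommendations = json.loads(recommendations_json)
    return json.dumps({"product": product, "recommendations": recommendations})

# Example payloads (hypothetical shapes, for illustration only):
xml_reply = "<product><id>42</id><name>Widget</name></product>"
json_reply = '[{"id": 7, "name": "Gadget"}]'
```

A real gateway would also fan the two upstream requests out concurrently (the ‘Reactive Programming Model’ mentioned above) rather than calling them one after the other.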
In order to efficiently utilize an API Gateway, the following points need to be considered:
- Inter-Service Communication
- Partial Failures
- Service Discovery
Inter-Service Communication
In the monolithic approach, one module could easily talk to another using in-process calls, since all modules run as one process on the same machine. In the Microservices approach, this isn’t so straightforward, as each service runs in a separate, isolated VM or container.
Microservices can communicate with each other in the following ways:
- Request/async response – a service sends a request to another service and receives the reply asynchronously.
- Publish/async responses – a service publishes a message and asynchronously receives responses from the services that consume it.
There doesn’t have to be a uniform, all-purpose, fixed way for every service to talk to another. The different communication mechanisms can be mixed and matched across services as per the requirement or functionality, just as with databases as mentioned earlier.
There are also a variety of different message formats. Services can use human readable, text-based formats such as JSON or XML. It’s easier for human operators to debug the data flow when these are used. Alternatively, they can use a binary format (which is more efficient) such as Avro or Protocol Buffers. Services will process this binary data faster but it’ll be much more difficult for a developer to parse through this data in case of errors. The following diagram illustrates inter-service communication using the different methods.
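To illustrate the text-vs-binary trade-off, the sketch below serializes the same hypothetical trip record as JSON and as a packed binary struct (the `struct` module stands in here for a real binary format like Avro or Protocol Buffers):

```python
import json
import struct

# The same trip record, serialized two ways.
trip = {"trip_id": 12345, "fare_cents": 1450, "rating": 5}

# Human-readable: easy to inspect on the wire, but verbose.
text_payload = json.dumps(trip).encode("utf-8")

# Binary: two little-endian uint32s plus one uint8 = 9 bytes, but opaque
# to anyone reading the message queue without the schema.
binary_payload = struct.pack("<IIB", 12345, 1450, 5)
```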
Partial Failures
A partial failure refers to an event where one or more services in a Microservices deployment go into a failed state. A partial failure can occur for many reasons:
- A service might not be able to respond in a timely way to a client’s request.
- A service might be down because of a failure or for maintenance.
- A service might be overloaded and responding extremely slowly to requests.
Handling partial failures is a major aspect of deploying apps using the Microservices approach. Take, for example, the e-commerce application we used earlier:
Assume that the recommendation service faces an outage for one of the reasons mentioned above. In this case, if the client app isn’t programmed to handle it properly, the user could face a generic error page when viewing a product’s details, giving them a bad experience.
If partial failures are handled efficiently, this would not occur. In this case itself, if the recommendation service was down, instead of returning a full page error, the API gateway or client could instead keep the ‘Recommendations’ section blank or fill it with cached data and render the rest of the page as it should. This way, a user can view the other sections of the page without issue and continue with their interaction.
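That graceful degradation can be sketched like this; the stubbed `fetch_recommendations` call and cached data stand in for the real network request to the (hypothetical) recommendations service:

```python
# Last known good data, e.g. refreshed periodically in the background.
CACHED_RECOMMENDATIONS = [{"id": 7, "name": "Gadget"}]

def fetch_recommendations():
    """Stand-in for the real network call; raises during an outage."""
    raise ConnectionError("recommendations service is down")

def build_product_page(product):
    """Render the page even if recommendations are unavailable."""
    try:
        recommendations = fetch_recommendations()
    except ConnectionError:
        # Degrade gracefully: serve cached data (or [] to leave the
        # section blank) instead of failing the whole page.
        recommendations = CACHED_RECOMMENDATIONS
    return {"product": product, "recommendations": recommendations}
```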
There are several strategies used to detect and handle partial failures. Some of them are:
- Fallback logic: All errors and failures must have a corresponding fallback or error handling mechanism to ensure that a service being down is handled properly and in time.
- Network Timeouts: Always use timeouts in inter-service or client-service communication. Ensure that no resource is kept in a waiting state for too long. If a service is taking too long to respond, mark it as an error and run the corresponding error handling code.
- Limit the number of outstanding requests: Keep a limit on the maximum number of requests a client can keep pending with a service. Once this threshold has been crossed, all subsequent requests must be marked as failed and handled accordingly. This ensures that no resources are wasted on making unnecessary failed attempts.
- Circuit-breaker pattern: This is the most complex but most efficient error handling mechanism. Clients/services making requests should keep track of how many repeated attempts have failed. Once this value reaches a threshold, the service should auto-fail all subsequent requests instead of attempting needlessly and wasting resources. Once the timeout period has expired, the service can re-attempt the request. A success will reset the error counter and timeout period. A failure will extend the timeout period even further.
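The circuit-breaker behaviour described in that last point can be sketched as a small class; the threshold and timeout values below are arbitrary illustrative defaults:

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, fail fast for `timeout`
    seconds instead of attempting needlessly; a success after the
    timeout resets the error counter."""

    def __init__(self, threshold=3, timeout=30.0):
        self.threshold = threshold
        self.timeout = timeout
        self.failures = 0
        self.opened_at = None  # time the breaker tripped; None = closed

    def call(self, func, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # timeout expired, allow a retry
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the error counter
        return result
```

Production-grade implementations (e.g. in resilience libraries) also add half-open probing and back-off, but the core state machine is the one above.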
Service Discovery
When using Microservices, knowing the location and credentials of each service can be a hassle. When auto-scaling and auto-recovery of services are present, this becomes even trickier, as the IPs, ports and credentials of each of these services can vary.
To make locating these dynamically changing services easier, a service discovery system can be used. The key component in a service discovery system is the service registry. It is a separate database/record system which holds details about each and every service that is deployed. Clients/services look up details of services using this registry before connecting to them.
To enable service discovery, each service’s startup script must include code that registers it with the Service Registry server. Consequently, each service’s shutdown script must include code that de-registers itself. However, a service registry will also check the registered services for a heartbeat at a fixed interval. If a service has not been shut down gracefully or de-registered, the registry will mark it as DOWN after a certain number of failed attempts and not display it to querying clients/services.
Some examples of available service registry systems are:
- Netflix Eureka
- Apache Zookeeper
- HashiCorp Consul
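The register / heartbeat / mark-DOWN cycle described above can be sketched as a toy registry (real systems like the ones listed handle this over the network; here `now` is passed explicitly so the timing logic is easy to follow):

```python
class ServiceRegistry:
    """Toy service registry: services register on startup, send periodic
    heartbeats, and are marked DOWN after too many missed heartbeats."""

    def __init__(self, heartbeat_interval=10.0, allowed_misses=3):
        self.deadline = heartbeat_interval * allowed_misses
        self.services = {}  # name -> (address, last_heartbeat_time)

    def register(self, name, address, now):
        self.services[name] = (address, now)

    def heartbeat(self, name, now):
        address, _ = self.services[name]
        self.services[name] = (address, now)

    def deregister(self, name):
        self.services.pop(name, None)

    def lookup(self, name, now):
        """Return the service's address, or None if unknown or DOWN."""
        entry = self.services.get(name)
        if entry is None:
            return None
        address, last_seen = entry
        return address if now - last_seen <= self.deadline else None
```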
There are 2 patterns that clients/services can use for service discovery:
1) Client-side discovery
In this pattern, a client contacts the service registry directly, gets details about the Microservice it wishes to connect to and then connects to that Microservice directly. Similar to the Direct Client-to-Microservice communication described earlier, the Microservices would all have to be directly accessible to the client, which could pose security risks. Similarly, it would make the client app bulky, as it would have to know how to connect to each service individually.
2) Server-side discovery
In this pattern, the client/service does not connect to the service registry directly. Instead, it does so via a load balancer, which queries the registry, performs the requested action on the correct Microservice and returns the response to the client. The services and registry do not have to be publicly exposed in this scenario, making it more secure. The API Gateway described earlier can double up as the load balancer in this case. The client app thus becomes less bulky/complex, since it only has to talk to the load balancer.
Event-Driven Architecture
In the monolithic approach, there was one single database for all modules to use, so maintaining sync of data between modules wasn’t an issue. In the Microservices approach, things get a lot more complex, since each service has its own private database which can be accessed/modified only through that service’s API/endpoint.
As per the above diagram, the Order service cannot directly modify the Customer Service’s database since it is private and it must go via the Customer Service’s API to do so. To solve this issue, the event-driven architecture can be used.
In this architecture, a Microservice publishes an event when something significant happens, such as when it updates a database record. Other Microservices subscribe to those events. When a Microservice receives an event it can update the relevant records in its own database, which might lead to more events being published and consumed by other services.
Events can be managed in the following ways:
- Using message queues like RabbitMQ or ZeroMQ.
- Using a database as middleware between services, i.e. one service creates an event entry in the DB; the other service continually polls this DB and acts when an event entry is detected.
The below set of illustrations shows how the event-driven architecture functions between two services using a message queue:
It’s worth noting that while event-driven architecture is a very efficient way of maintaining data consistency across services, it is by no means an ACID consistency model. Instead, it is more like the BASE consistency model where data is eventually consistent. That’s one compromise that must be made when moving to the Microservices approach.
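The publish/subscribe flow between the Order and Customer services can be sketched in-process as below. A real deployment would use a broker like RabbitMQ; the event names, stores and fields here are illustrative assumptions:

```python
from collections import defaultdict

class EventBus:
    """In-process stand-in for a message queue such as RabbitMQ."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Each service keeps its own private store; they share only events.
customer_db = {"c1": {"name": "Alice", "order_count": 0}}
order_db = {}

def on_order_created(event):
    """The Customer service reacts to the Order service's event by
    updating its own copy of the data -- eventual, not ACID, consistency."""
    customer_db[event["customer_id"]]["order_count"] += 1

bus.subscribe("order_created", on_order_created)

def create_order(order_id, customer_id):
    """The Order service writes to its own DB, then publishes an event."""
    order_db[order_id] = {"customer_id": customer_id}
    bus.publish("order_created", {"order_id": order_id,
                                  "customer_id": customer_id})
```

With a real broker in between, the subscriber’s update happens some time after the publish, which is exactly the BASE-style eventual consistency described above.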
Refactoring monoliths into Microservices
Now that we’ve covered the different concepts related to the Microservices approach, the next step is to see how to convert an existing monolithic application to use Microservices. Simply converting each module to a Microservice might seem like the obvious approach, but that comes with its own problems. Instead, it’s better to move a monolith to the Microservices pattern slowly and deliberately.
Step 1: Stop adding to the monolith
When a feature request comes in to add more functionality to the monolith, do not add another module to it. Instead, develop this new function as a separate Microservice which interacts with the existing monolith. This will give rise to the following additional components:
Request Router: It sends requests corresponding to new functionality to the new service and routes legacy requests to the monolith. The API Gateway can be used for this function.
Glue Code: The monolith must be modified slightly to be able to interact with the new Microservice. Similarly, the Microservice must be able to talk to the monolith. The code which enables this is called the Glue Code.
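The Request Router’s job can be sketched in a few lines; the path prefixes and backend hosts below are hypothetical examples, not part of the original system:

```python
# Paths for the new feature go to the new Microservice;
# everything else still goes to the monolith.
NEW_SERVICE_PREFIXES = ("/recommendations",)

MONOLITH = "http://monolith.internal:8080"
RECOMMENDATION_SERVICE = "http://recommendations.internal:9000"

def route(path):
    """Request router: decide which backend should handle this path."""
    if path.startswith(NEW_SERVICE_PREFIXES):
        return RECOMMENDATION_SERVICE + path
    return MONOLITH + path
```

As more modules are extracted, more prefixes move from the monolith side of this table to the Microservice side, until the monolith is gone.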
Step 2: Identify the modules and their layers
A typical monolith comprises three layers.
- Presentation layer – Components that handle HTTP requests and implement either a (REST) API or an HTML-based web UI. In an application that has a sophisticated user interface, the presentation tier is often a substantial body of code.
- Business logic layer – Components that are the core of the application and implement the business rules.
- Data-access layer – Components that access infrastructure components such as databases and message brokers.
Once identified, ensure that each layer contains only code intended for its purpose; for example, the presentation layer should only consist of frontend / UI / CSS components, the business logic layer should only contain code translated from the business requirement(s), and the data access layer should only contain the parts required to talk to the DB. Once done, identify the modules that can be broken out into Microservices.
This step essentially reduces the probability of parts of the monolith breaking down, thereby facilitating a smoother transition to Microservices.
Step 3: Convert existing modules into standalone Microservices
- Start with the modules that have the fewest dependencies and the least chance of breaking functionality.
- Create interfaces between the Monolith’s API and the new extracted service’s API.
- Run both old modules and new Microservices simultaneously initially.
- Make sure that there is proper compatibility between the old modules and the new Microservices.
- Deprecate the old module once all tests are successful.
- Make efficient use of service discovery & load balancers for optimal performance.