The approach to software development has evolved considerably over the last decade or so. Technology has certainly improved in this period, and so have hardware and computing capabilities. Along the same lines, the way we approach building software has also improved by leaps and bounds.
We used to have big monoliths, often termed “one-man armies”, which were capable of handling literally everything. The main advantages were:
- Fewer integration issues
- Less code, as the components were co-located
However, monoliths have drawn flak on account of their inherent restrictions: they become an unmanageable beast over a period of time, and any change (small or large) leads to lots of ripple effects.
A shift of mindset in design, and thereby implementation, from monoliths to microservices has certainly helped in moving away from some of the inherent shortcomings of monoliths, but just by using a microservices architecture, the deal is not done. So are there some best practices/guidelines that architects need to keep in the back of their minds? Let’s try and find out together.
Is everything alright?
The dictionary meaning of the word Consistency is “the quality of always behaving or performing in a similar way, or of always happening in a similar way”. It is the ability of a system to respond similarly to multiple interactions with it. In the world of software, there are primarily two kinds of consistency: Strong consistency and Eventual consistency. Which one to choose while designing depends majorly on the business requirements.
When we talk about Microservices architecture, we are primarily talking about Choreography and/or Orchestration, and by and large they go hand in hand in most projects. However, irrespective of the pattern that is used, every microservice that needs to support both reads and writes has to be watchful of the way it handles these two types of requests. Let’s dive deeper into the exact problem and the approaches that can be explored to find a solution.
What are we after?
When a microservice is serving both reads and writes, it needs to be careful of the following:
- The writes happening on the microservice must not slow down the read requests landing on it
- With multiple writes landing on the microservice, there are chances that read requests end up reading inconsistent or partially committed data
- Since the microservice would have its own database, any requirement to maintain the historical state of data updates would not be catered to by default
- It becomes difficult to scale different parts of the application if there’s a business need for it
- Sometimes what needs to be shown on the user interface is different from what is persisted by the write requests landing on the system. This means that the application may have to read data from multiple sources (multiple microservices, or multiple parts of the same service) to generate the complete context, as the write requests have only persisted granular data in the database
What can be done?
Now that we know the problem domain and the direction in which we are heading, let’s try to find out what options we have:
Typically, when we design any system, we primarily talk about the entities, their attributes, their state, etc. Down the line, we end up with class diagrams and entity-relationship diagrams that depict the relationships between these entities.
Conventionally, in most architectures, the prime concern is the entities, i.e. how they are created, how they are modified, how they are accessed, etc. The events or commands that lead to a change in the state of an entity are often not given their due importance.
To understand this better, let’s take the example of a typical organization: there are employees, located at different locations, who work for different departments. When an employee joins the organization, a record is created for them, and along with it the company also stores the associated department, work location, and reporting manager. Over a period of time, if the employee goes through changes in department, location, or their own profile, the company keeps only the latest information, i.e. the company generally doesn’t bother to keep track of what led to the change in the employee’s information.
The above hierarchy shows that the Organization has four employees, with Employee#2, Employee#3, and Employee#4 reporting to Employee#1.
On the contrary, Event Sourcing takes a novel approach!
The primary concern in Event Sourcing is the events, or the commands that lead to the generation of events. The focus is on how the events will be persisted, how they will be used to generate the state of an entity, and how they can be used as the source of truth for any requirement that is not apparent at the time of designing.
As can be seen above, instead of maintaining the current or latest state of the employee, the application keeps track of all the events that were fired on the employee entity. In this way, the event store brings a unique thought process to maintaining information. Now, how can this be used to extract everything that was straight away possible with an entity store? Let’s explore that in the following section.
How it works
Entities are the core of any application; users interacting with the application ultimately end up issuing Commands on the entities, either directly or via some interface. These commands, in turn, end up firing Events. Let’s continue with the same example we took before to understand this better.
In this example, Employee is an Entity; the activities of adding, removing, or updating an employee can be considered Commands; and the corresponding reactions they generate, like Employee Added, Employee Deleted, or Employee Updated, can be considered Events. There is a general convention for naming events: they are normally named in the past tense, as they represent what has already happened in the application at some point in time.
So to begin with, we start with an empty Organization, and then
- We added an Employee to it, which led to the triggering of the Employee Added event; next up
- We added another Employee, which led to the triggering of another Employee Added event; and finally
- We updated the first Employee, which led to the triggering of the Employee Updated event
The final state of the Organization can be determined by playing all the events sequentially. However, if we want to generate the state of the Organization at any earlier point in time, that too is feasible, as the Event Store has captured all the events, along with a timestamp, in the order they were generated in the application.
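The replay described above can be sketched in a few lines of Python. The event shape and names here are illustrative, not a prescribed schema; a real event store would also persist timestamps, sequence numbers, and aggregate identifiers.

```python
from dataclasses import dataclass

# Illustrative event record; real stores keep richer metadata
# (timestamps, sequence numbers, aggregate ids, and so on).
@dataclass
class Event:
    kind: str          # e.g. "EmployeeAdded", "EmployeeUpdated"
    employee_id: int
    data: dict

def replay(events):
    """Rebuild the Organization's state by applying events in order."""
    employees = {}
    for e in events:
        if e.kind == "EmployeeAdded":
            employees[e.employee_id] = dict(e.data)
        elif e.kind == "EmployeeUpdated":
            employees[e.employee_id].update(e.data)
    return employees

# The three steps from the walkthrough above, as stored events:
event_store = [
    Event("EmployeeAdded", 1, {"name": "Alice", "dept": "HR"}),
    Event("EmployeeAdded", 2, {"name": "Bob", "dept": "IT"}),
    Event("EmployeeUpdated", 1, {"dept": "Finance"}),
]

state = replay(event_store)
```

Replaying only a prefix of `event_store` yields the state as of that earlier point in time, which is exactly how point-in-time reconstruction works.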
So far we have talked about positive flows, wherein the application received events, processed them, and persisted them in the event store. What if something goes wrong during command execution? Well, for application errors, we can keep a separate log of all the error events that failed to get processed.
Now, the problem with this kind of storage mechanism is that, over a period of time, the number of commands on an entity can grow well beyond a handful, and deriving the final state, or even an intermediate state, would mean playing all the events in sequence every time the application needs to send that state back to the user requesting it. Is there anything that can be done? Yes, in comes CQRS!
CQRS stands for Command and Query Responsibility Segregation. What this essentially means is separating the Commands that generate events from the read requests landing on the application. The primary benefit the application draws from such a pattern is that it is freed from handling both read and write concerns together. Along with Event Sourcing, the CQRS pattern decouples the reads and the writes.
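A minimal sketch of that separation, assuming in-memory stand-ins for the event store and the read-side projection (all class and method names here are illustrative):

```python
class WriteModel:
    """Command side: only appends events and notifies subscribers."""
    def __init__(self, event_store, subscribers):
        self.event_store = event_store
        self.subscribers = subscribers

    def handle_add_employee(self, emp_id, name):
        event = {"kind": "EmployeeAdded", "id": emp_id, "name": name}
        self.event_store.append(event)       # event store is the source of truth
        for notify in self.subscribers:      # push the event to the read side(s)
            notify(event)

class ReadModel:
    """Query side: serves a pre-built, denormalized projection."""
    def __init__(self):
        self.by_id = {}

    def apply(self, event):
        if event["kind"] == "EmployeeAdded":
            self.by_id[event["id"]] = event["name"]

    def get_employee(self, emp_id):          # queries never touch the write side
        return self.by_id.get(emp_id)

store = []
read = ReadModel()
write = WriteModel(store, [read.apply])
write.handle_add_employee(1, "Alice")
```

Note that the read side answers queries from its own projection, so heavy write traffic never forces a replay on the query path.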
In our previous example, every time a Command executes, an event is appended to the event store. This activity also triggers a notification that external systems, applications, and microservices within the same application can subscribe to, and then do whatever they are intended to perform.
As we can see in the above illustration, a series of events is recorded in an event store in the order they are generated within the application. In this way, the event store acts as the source of truth for everything happening within the application. One more thing: apart from just persisting the events, the event store also notifies other systems, external applications, and/or microservices that are part of the same ecosystem.
As is visible from the above, these events, upon subscription, are consumed by various views/microservices, which proactively format the data and create a presentable view as desired by the user interface, so that runtime calculations can be avoided, thereby making the user experience more fluid.
Apart from this, any business need to collect data for auditing and tracking purposes can also be met by subscribing to these events.
There are certain advantages that we get out of this. One of them is that if at any point in time we lose a view for whatever reason, we can recreate it in no time by replaying the events persisted in the event store.
Along with this, sometimes the business is not clear upfront about all the user-facing requirements and may come up with new screens to be incorporated in the middle of the show. All the application development teams need to do is replay the events from the event store and prepare the desired view that satisfies the new screen’s demands.
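For instance, a brand-new “employees per department” screen could be backfilled purely from the historical events. A sketch, with a hypothetical event shape:

```python
def build_department_view(event_store):
    """Project stored events into a department -> employee-count view."""
    counts = {}
    for e in event_store:
        if e["kind"] == "EmployeeAdded":
            dept = e["dept"]
            counts[dept] = counts.get(dept, 0) + 1
    return counts

# Events that were recorded long before this view was ever requested:
events = [
    {"kind": "EmployeeAdded", "id": 1, "dept": "HR"},
    {"kind": "EmployeeAdded", "id": 2, "dept": "IT"},
    {"kind": "EmployeeAdded", "id": 3, "dept": "IT"},
]

dept_view = build_department_view(events)
```

No schema migration or data backfill job is needed; the new projection is simply another consumer of the same event history.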
Can this be the de facto choice?
Well, there are some considerations before attempting this kind of pattern in an application. The foremost is that such a system would be Eventually consistent, which may or may not be in alignment with what the business demands. If users can tolerate a small delay before the screens reflect their actions, then we can use this; otherwise, we should fall back to a Strongly consistent approach.
Another factor we should keep in mind is that adopting this pattern requires a change of mindset, from an entity store to an event store. Development teams have to come out of their basic CRUD mindset and start thinking of events as the source of everything within the application.
Additionally, because the events in the event store are immutable, over a period of time the event store may end up getting bigger and bigger. So architects and developers have to keep in the back of their minds the need for some kind of periodic snapshots, from which the state of an entity can be regenerated as and when needed without replaying the entire history.
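One way to sketch this, assuming a snapshot records the folded-up state plus how many events it already covers (the shapes and names are illustrative):

```python
def apply(state, event):
    """Fold one event into the state (illustrative: upsert by employee id)."""
    new = dict(state)
    new[event["id"]] = event["name"]
    return new

def load_state(events, snapshot=None):
    """Replay from the latest snapshot instead of from the very beginning."""
    if snapshot is None:
        state, start = {}, 0
    else:
        state, start = dict(snapshot["state"]), snapshot["after_seq"]
    for e in events[start:]:
        state = apply(state, e)
    return state

events = [{"id": i, "name": f"emp-{i}"} for i in range(5)]

# Snapshot taken after the first three events; only the tail is replayed.
snap = {"after_seq": 3, "state": load_state(events[:3])}
restored = load_state(events, snap)
```

The state restored from the snapshot plus the tail must equal the state from a full replay; that equivalence is the correctness condition for any snapshotting scheme.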
Extra care should also be taken when handling events coming into the event store from distributed instances of the application. Generally, having a timestamp or some kind of unique, ordered identifier helps to determine the correct sequence of events in the event store while replaying them to generate the current state of an entity.
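A small sketch of why that ordering matters: events arriving out of order from different instances must be sorted by their sequence number (or timestamp) before being replayed (the event shape is illustrative):

```python
# Events received from two application instances, not in generation order.
incoming = [
    {"seq": 3, "kind": "EmployeeUpdated", "id": 1, "name": "Alice B."},
    {"seq": 1, "kind": "EmployeeAdded", "id": 1, "name": "Alice"},
    {"seq": 2, "kind": "EmployeeAdded", "id": 2, "name": "Bob"},
]

# Restore the true order before folding the events into a state.
ordered = sorted(incoming, key=lambda e: e["seq"])

names = {}
for e in ordered:
    names[e["id"]] = e["name"]
```

Replaying `incoming` unsorted would leave Employee#1 with the stale name, because the later update would be overwritten by the earlier “added” event.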