Small-scale microservices in the wild (1): Anachronistic monoliths
Kristian R.
Posted on September 18, 2017
Microservices are all over the place these days. So are frameworks for building “microservices” from scratch, and so are infrastructure and runtime components that help build real-life application environments out of microservices. Using small components to build larger, more complex applications seems so much the de-facto standard these days that we tend to apply this pattern to each and every problem at hand. Depending upon your problem, this might or might not be a Good Thing(TM). We’ve been down the route of cutting more complex applications into smaller components for the last couple of years, and there are a couple of insights to be gained from this. I’d like to spend some time pondering these things and looking at where we are now.
In order not to bore the few people who will be reading this, and in order not to attempt too much in one batch and never get it done, I’ll try to make this a short series of write-ups. I hope it somehow works out.
Solving problems
In general, technical solutions should, well, actually solve certain real-world problems, whether directly in a business domain or in a technical domain that in turn serves real business needs. Microservices, most of the time, seem to fall into the latter category: a technical solution to certain technical problems one might encounter when building non-trivial applications. Not much of a surprise, such solutions address some problems better than others, and there may be requirements or demands where a particular solution makes things considerably worse. Microservices are no exception here, as can be seen in two scenarios for optimizing an existing IT environment.
Speed up development.
In most cases, one main pain point is the notoriously limited amount of developer resources available, assuming most of us aren’t Google or Amazon and most of our teams are considerably smaller. Simple and blunt: you might want a monolith. Period. This sounds pretty anachronistic, but consider what splitting up your system into microservices usually means these days: composing an application from several smaller services communicating by means of HTTP and JSON. This has interesting consequences from a development point of view:
You will spend a good deal of brain (as well as CPU) cycles on things such as (de)serialization and transfer of data. You have to translate your business objects and data structures into some language-agnostic representation that can safely be sent across some sort of wire. In a monolithic application, where all or at least the vast majority of communication happens locally, this is of little to no relevance: you live inside a shared data model and a common application context, so there is no “outer world” involved.
You will have to programmatically add reliability and resilience to your application by dealing with the fact that required systems could randomly respond too slowly or fail completely (a minimal sketch of what this looks like in code follows after this list). Again, within a single application context this is of little relevance, because the application tends to just work or fail as one.
You also need to consider scenarios for retransmitting certain batches of data, as well as getting things such as distributed transaction handling “right” in at least a rudimentary way, whatever “right” means for your application. Handling transactions isn’t always easy even within one single application (if it involves data storage and business processes that might already have been started); spanning a single transaction across multiple applications doesn’t make it any easier.
As soon as any interface you expose is used by any client, it becomes virtually “immutable”: assuming loosely coupled systems, it will be hard or even impossible to track down all users of that interface. Removing “old” methods is therefore always riskier than, for example, removing deprecated API in a large Java project, which will fail to build or deploy as soon as you remove code still in use by some specialized part of your system.
You will have more boilerplate to deal with: building the various distributable modules, providing the infrastructure they require, making dependencies (such as other services) configurable, and so on. In a monolith, if you do it right, you do these things once for the whole application.
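To make the first two points a bit more tangible, here is a minimal sketch of what a single remote lookup can look like once it crosses a service boundary, using the plain Java 11 HttpClient. The service name, URL, timeouts and retry count are made up for illustration; in a monolith, the same lookup would simply be a local method call.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class CustomerClient {

    // Base URL and timeouts are hypothetical; in a real setup they would come
    // from configuration, which is exactly the extra boilerplate mentioned above.
    private static final URI BASE_URI = URI.create("http://customer-service.internal:8080");

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    /**
     * Fetches a customer as raw JSON, retrying a few times because the remote
     * service may respond too slowly or not at all. Mapping the JSON back into
     * a domain object (e.g. with a library like Jackson) would be yet another
     * step that simply doesn't exist for an in-process call.
     */
    public String fetchCustomerJson(String customerId) throws InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(
                        BASE_URI.resolve("/customers/" + customerId))
                .timeout(Duration.ofSeconds(3))
                .header("Accept", "application/json")
                .GET()
                .build();

        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    return response.body();
                }
                // Non-200 response: fall through and retry
                // (a real client would be more selective about which codes to retry).
            } catch (java.io.IOException e) {
                // Timeout or connection failure: the remote side is slow or down.
            }
            Thread.sleep(200L * attempt); // crude backoff between attempts
        }
        throw new IllegalStateException("customer-service unavailable after 3 attempts");
    }
}
```

None of this is rocket science, but every remote dependency needs some variation of it, and that effort simply doesn’t exist for a local call.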
There are surely more points than these, but I see these as the most critical ones when following such an approach, at least in our environment. It might be different in yours, though.
Speed up Ops, too?
Development is all interesting and good, but it usually doesn’t earn us any money; in most cases, we need a running application for that. And, much like development, operating and running applications gets more difficult with a distributed system, compared to a single monolith or a single application in some application server:
Suddenly you will have to deal with various services that all need to be up and running for the whole system to work flawlessly. You will most likely have to take care of services (re)starting, maybe in the right order, too.
Most operational aspects, such as deployment, logging, monitoring, debugging or auditing, get more complex because there are more components involved. You will most likely need considerable effort, even conceptually, to maintain a bunch of interdependent modules where, given a monolith, you would just have to work with a single application and maybe its logfiles and monitoring hooks.
What is actually an advantage in some use cases comes as a disadvantage in straightforward operations: you will have to consider scalability and performance not just for one application but for many. If you run into bottlenecks, you will potentially have a much harder time finding out which component is the critical one and how to scale it.
Also from a system security point of view, things might become … interesting. This obviously includes the HTTP/REST APIs exposed to clients: are they always appropriately secured, audited, encrypted? Or is SSL termination rather something that happens at an external gateway, while local network traffic doesn’t use HTTPS at all? Are services capable of detecting (and dealing with) man-in-the-middle attacks? Are applications in any way able to validate requests and responses to figure out where they come from and whether they can actually be trusted? Or, even simpler, how do we avoid internal services talking to each other using standard credentials stored in configuration files (and, maybe worst of all, in some git repository available to the whole team)? Do we ensure that “production” systems can only talk to each other and are not reachable from, say, development or testing environments? (A small sketch of the credentials point follows below.)
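Picking out just the credentials question: one low-tech mitigation is to keep secrets out of configuration files (and thus out of git) entirely and hand them to each service through its environment at startup. A minimal sketch; the variable names are made up for illustration, and a real setup might rather use a dedicated secrets store:

```java
/**
 * Minimal sketch: read service-to-service credentials from the environment
 * instead of a configuration file checked into git. The variable names are
 * hypothetical; a real setup might use a secrets manager instead.
 */
public final class ServiceCredentials {

    private final String user;
    private final String token;

    private ServiceCredentials(String user, String token) {
        this.user = user;
        this.token = token;
    }

    public static ServiceCredentials fromEnvironment() {
        String user = System.getenv("BILLING_SERVICE_USER");
        String token = System.getenv("BILLING_SERVICE_TOKEN");
        if (user == null || token == null || token.isBlank()) {
            // Fail fast at startup instead of silently running with defaults.
            throw new IllegalStateException(
                    "BILLING_SERVICE_USER / BILLING_SERVICE_TOKEN not set");
        }
        return new ServiceCredentials(user, token);
    }

    public String user()  { return user; }
    public String token() { return token; }
}
```

Whether it’s environment variables, a vault or something else matters less than not having the same default password sitting in every repository.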
So why bother after all?
Being confronted with all these points myself, a line of reasoning to defend microservices wouldn’t be all too hard: there are established solutions to most of these problems. There are orchestration and deployment infrastructures to handle vast loads of servers and services, ranging from more “traditional” tools such as puppet or chef to large-scale environments such as kubernetes. There are tools such as zabbix, nagios, the elastic stack and a plethora of others, in most cases open-source, freely available and just waiting for you to solve your problems with them. Plus, for each of these problems you might find at least one good reason to give up on monolithic structures in favor of a more modular microservices architecture.
That’s actually okay. I don’t really see a monolith as a very desirable approach to application architecture anymore, either. But, as always: your mileage might vary. If there’s one thing to take away here, it should be: be careful about why you cut your system into small pieces. You will end up with a more distributed system, you will at least initially increase the maintenance effort, and you will add a load of accidental complexity to a system that might just fail. So be critical about the drawbacks and downsides of such an approach, focus on business requirements and, most of all:
Come up with good reasons why to build a modular, distributed system instead of a monolith. Come up, too, with a good idea of how “small” or “large” a good size for a service is in your environment. These reasons might vary depending upon your business domain and company size, so I’ll soon spend some time pondering our reasons to still follow this path. Stay tuned… And feel free to add your own feedback and insights. Far more than reporting our ideas, I’d like to hear about the experiences others have had in this field…
(originally posted on dm.zimmer428.net)