Micro frontends: my lessons learned
Natalia Venditto
Posted on December 18, 2020
Hello there!
For the past 6 months at work, I have been busy analyzing, experimenting, testing and sometimes suffering, while exploring the depths, the benefits and the drawbacks of micro frontends as an architecture decision.
All I am going to describe, and all the opinions I will express, come from in-depth analysis and code that was shipped to production. I don't hold a fixed position with respect to micro frontends, simply because it is my firm belief that software architecture solutions are always good when you have the problem they solve (hence, solution!), and always bad when you don't and you're following trends for the mere wish to integrate innovations your software does not need.
The key is to use architecture patterns when really needed, and not to follow a trend or because the buzzword sounds good. There are very valid use cases for micro frontends, where they are probably the best solution.
This post is long, so grab a coffee, a tea or read it in parts.
When did micro frontends originate?
Micro frontends are the frontend analog to micro services in the backend.
Basically, a few years ago, with the explosion of digital transformation (enterprises, especially large organizations, migrating to the web to operate their business), web software systems started growing larger and larger into huge monoliths that encompassed everything.
At some point, that architecture started bloating and smelling, and architects and developers came up with diverse approaches and strategies to break apart the code and the teams, and have dedicated release cycles.
Monoliths are not necessarily bad, but sometimes they're hard to manage
When you have a very large system and its pieces are very tightly coupled, meaning they're very inter-dependent, whatever change, upgrade, update or fix you make to one part, however small it may be, requires you to release the whole monolith to a higher version. That also affects whatever depends on parts of your monolith, like sub-projects.
Even when using semantic versioning for dependencies, even when modularizing, and even when breaking teams apart to give them ownership of certain parts, developing and maintaining a large coupled system is a very challenging task.
Those are the fundamental reasons behind a micro-services architecture: the wish to split large systems apart in a way that every component, and the team in charge of it, is independent in many areas
- technologies
- lifecycles
- processes
- operations
Especially in combination with event-driven strategies, API-first concepts and cloud platforms, micro-services have proven to solve some of the aforementioned problems, while introducing their own, due to the need for complex and well-designed orchestration, for example.
Micro-services in the frontend
If splitting apart the backend was successful, why not the frontend?
Actually, the split in the backend, with dedicated teams to cover a feature or service, demanded we take our own measures in the frontend.
If we really wanted to make sure those components were completely independent, there was no other way around it, anyway. And that was the birth of micro frontends.
Micro applications
In all honesty, in enterprise, it is not something new. We have been doing this for years.
There are two ways in which we do this: we have a Single Page Application per hyperlink or URL, or we embed a fully functional micro application in a page region, typically using an iframe.
The pros for this approach are
- the team developing the app can be fully independent, choosing their own technologies and cycles
- the app is fully encapsulated in the iframe, thus not interfering with the JavaScript or CSS of the host
The cons are
- if there is more than one micro-app instance initializing in a page (or route), the impact on performance can be really high, and there is likely a duplication of dependencies, since shared assets like styles or fonts cannot be shared between the host and the app
- if the apps are integrated in the build system, there may be a chance to share certain dependencies, and even exercise some code-splitting/tree-shaking; however, teams would lose independence, since they're bound to the cycles of the system where the build process lives
- routing can become very complex, especially if the navigation tree is very deep
- UX consolidation will be challenging. Aspects like localization, internationalization, and preservation of unified styles, will demand more efforts, alignments, or a very well documented style-guide
- if integrated via an iframe, you will need additional code to handle iframe resize (see the sketch after this list). Because the iframe is completely encapsulated, you won't be able to share any assets
- if integrated via an iframe, additional measures for SEO and A11y will need to be taken
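As an illustration of that extra plumbing, here is a minimal sketch of iframe resizing via postMessage. The message shape and the 'micro-app-frame' id are assumptions for the example, not part of any standard.

```js
// Inside the embedded micro app: report the document height to the host.
// A real implementation should replace '*' with the host's known origin.
const reportHeight = () => {
  window.parent.postMessage(
    { type: 'micro-app:resize', height: document.documentElement.scrollHeight },
    '*'
  );
};
new ResizeObserver(reportHeight).observe(document.documentElement);

// In the host page: listen for the message and resize the iframe accordingly.
const frame = document.getElementById('micro-app-frame');
window.addEventListener('message', (event) => {
  // Validate event.origin against the micro app's origin in production.
  if (event.data && event.data.type === 'micro-app:resize') {
    frame.style.height = `${event.data.height}px`;
  }
});
```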
Does not change
- if you're consuming a bundled app, you would still need to update your main system dependency version, and that little change demands a release before you could see any updates made by an independent team to a micro app
Web Components
Another popular, yet less proven(*), way of integrating micro frontends is via the Web Components W3C standard. For those who are new to web development or have never heard of them, web components are a web standard that allows the creation of custom elements (as in custom HTML elements) that, via the DOM, have properties either custom or inherited from the HTMLElement parent (or whatever element class is extended).
It is possible to bootstrap a JavaScript framework on top of a Web Component, as well. There are many libraries that support extended functionality and also provide the necessary polyfills to make sure Web Components are supported across browsers. As you can imagine, like with any W3C specification, full support only arrives over time, and customElements, the Shadow DOM API, etc., as parts of Web Components, are only somewhat there.
(*) When I say less proven, I mean that there are fewer records of this pattern being successfully used in large enterprise production systems
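To make the idea concrete, here is a minimal custom element sketch using Shadow DOM for encapsulation. The tag name, markup and attribute are invented for the example.

```js
// A tiny custom element rendering a teaser; the names are illustrative only.
class ProductTeaser extends HTMLElement {
  // Observe the 'title' attribute so changes re-render the element.
  static get observedAttributes() {
    return ['title'];
  }

  connectedCallback() {
    // attachShadow gives us style/DOM encapsulation; omit it to share host styles.
    this.attachShadow({ mode: 'open' });
    this.render();
  }

  attributeChangedCallback() {
    if (this.shadowRoot) this.render();
  }

  render() {
    this.shadowRoot.innerHTML = `
      <style>h2 { font-family: sans-serif; }</style>
      <h2>${this.getAttribute('title') || 'Untitled teaser'}</h2>
      <slot></slot>
    `;
  }
}

customElements.define('product-teaser', ProductTeaser);
// Usage: <product-teaser title="New arrivals">Some description</product-teaser>
```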
The pros of web components
- web components are more or less encapsulated depending on whether Shadow DOM is used or not. That, I have to say, may be either a pro or a con; it depends on your use-case.
- when less encapsulated, web components can leverage common assets, which makes it easier to achieve a more integrated user experience, and consistent look and feel, especially for corporate identity
- having the ability to consume commons and shared code reduces the amount of data shipped over the wire and prevents duplicates, which enhances performance at runtime
- SEO and Accessibility are treated as they are for any HTML subpart, so no additional efforts, there
Cons of web components
- if you have to give support to very old browsers, expect to have to ship additional code to polyfill them
- if the teams behind each component are independent and they can make their own choices, can you foresee the impact on performance of bootstrapping and initializing multiple frameworks in a page?
Does not change
- same as with micro apps: if you're consuming a bundled app, you would still need to update your main system dependency version, and that little change demands a new build, release and deploy before you could see any updates made by an independent team to the web component bundle
Orchestration, or bringing the whole micro parts together
It doesn't matter if your micro parts are standalone web components or micro applications, you will need to bring all that together, somehow.
That is only possible if you define a team in charge, exclusively, of what we will call 'the host'.
In large enterprise systems, the host is usually a CMS plus a large infrastructure to support it. The team managing the host should probably be in charge of the orchestration and composition, and make the following aspects possible (a configuration sketch follows this list)
- configuration for initialization (especially authored!)
- routing
- communication between components
- state management
- providing commons/shared code
- availability of any other user-triggered event-based API's
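As a rough illustration of what the host team could own, here is a hypothetical configuration object mapping routes to micro frontends. The property names and URLs are invented for this sketch; in a CMS-driven setup, this kind of object would typically be authored content.

```js
// Hypothetical host configuration: which micro frontend mounts where, and on which route.
const hostConfig = {
  sharedAssets: ['/commons/styles.css', '/commons/event-bus.js'],
  microFrontends: [
    {
      name: 'search',
      route: '/search',
      mountPoint: '#search-root',
      entry: 'https://search.example.com/remote-entry.js',
    },
    {
      name: 'account',
      route: '/account',
      mountPoint: '#account-root',
      entry: 'https://account.example.com/remote-entry.js',
    },
  ],
};

export default hostConfig;
```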
Make it event-driven!
It doesn't matter if you're integrating micro frontends as micro apps in an iframe, or web components, or even ES imports into a much larger application: a key part of preserving a good user experience is making sure that all components can talk to each other. You can do that by using a publish/subscribe mechanism, like for example this one.
Publication/Subscription works in a very easy way. You publish a topic, and everyone listening or subscribed will react, in whatever way you determine, to that publication. It is a great mechanism to use for micro frontends, because the publisher of the message or topic does not need to know about the listeners. However, keep in mind that the subscribers obviously do need to know the topic they need to subscribe to.
There are very, very lightweight libraries to accomplish this, and if we're bootstrapping a framework, it probably has its own preferred technology or recommended pattern.
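For reference, a hand-rolled event bus can be as small as this sketch; the topic name in the usage example is made up.

```js
// Minimal publish/subscribe bus shared by the host and all micro parts.
const eventBus = {
  topics: {},

  subscribe(topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
    // Return an unsubscribe function so components can clean up on unmount.
    return () => {
      this.topics[topic] = this.topics[topic].filter((h) => h !== handler);
    };
  },

  publish(topic, payload) {
    (this.topics[topic] || []).forEach((handler) => handler(payload));
  },
};

// Usage: the publisher does not need to know who is listening.
const unsubscribe = eventBus.subscribe('cart:item-added', (item) => {
  console.log('Another micro part reacted to', item);
});
eventBus.publish('cart:item-added', { sku: '1234', quantity: 1 });
unsubscribe();
```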
But if you are bootstrapping different frameworks...
That's again when it gets messy! If you're bootstrapping different frameworks, all of which come with their own state management pattern and flavour, combining them may be especially challenging.
Each one will define its own store, and we will have to come up with ways to synchronise them. In my opinion, it defeats the purpose of a single source of truth, and I am not sure I buy into it. Even though we have effectively integrated Vuex and Redux stores in very large applications, it is by no means an easy task.
Combining multiple state management patterns defeats the purpose of having a single source of truth, or unique store
This is something very important to keep in mind if you're stitching together multiple parts, and you're dealing with user authentication/authorization, etc.
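To give an idea of the kind of glue code this requires, here is a sketch that mirrors one store's state into another. The stores here are simplified stand-ins for Vuex or Redux, built only for the example, and the `user` slice is invented.

```js
// Simplified stand-in for a framework store (think Vuex/Redux), just for illustration.
function createTinyStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((listener) => listener(state));
    },
    subscribe: (listener) => listeners.push(listener),
  };
}

const storeA = createTinyStore({ user: null }); // e.g. owned by the Vue part
const storeB = createTinyStore({ user: null }); // e.g. owned by the React part

// Bridge: whenever one store changes the user, mirror it into the other.
let syncing = false;
function bridge(source, target) {
  source.subscribe((state) => {
    if (syncing) return; // guard against infinite ping-pong between the stores
    syncing = true;
    target.setState({ user: state.user });
    syncing = false;
  });
}
bridge(storeA, storeB);
bridge(storeB, storeA);

storeA.setState({ user: { id: 42 } });
console.log(storeB.getState().user); // { id: 42 }
```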
Consistency in reactivity to browser native controls
This aspect is also essential. Having multiple micro parts behaving differently with respect to browser APIs, like history, could confuse the user and lead to a very poor experience. Each team should very likely make sure every component pushes its history state on state change.
Bookmarking may also become a challenge. Usually the user wants to bookmark a route, or the page in a specific state. If you're combining micro frontends in different technologies, this can become arduous.
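A minimal sketch of what "pushing history state" can look like with the History API; the state shape, the query parameters and the `renderSearchResults` hook are hypothetical.

```js
// When a micro part changes its internal state, reflect it in the URL and history.
function persistFiltersInHistory(filters) {
  const params = new URLSearchParams(filters).toString();
  // pushState keeps the change bookmarkable and part of the back/forward stack.
  history.pushState({ filters }, '', `${location.pathname}?${params}`);
}

// Every micro part should also restore itself when the user navigates back/forward.
window.addEventListener('popstate', (event) => {
  const filters = (event.state && event.state.filters) || {};
  // Hypothetical hook: the part re-renders from the restored state.
  renderSearchResults(filters);
});

persistFiltersInHistory({ query: 'micro frontends', page: '2' });
```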
Relying on Web API's will be helpful
Modern, evergreen browsers offer a myriad of APIs that you not only can but should leverage in order to execute composition. Even when, again, not all browsers may support all APIs, or some only offer partial support, they can be extremely useful in reducing development effort and abstraction around composition, by using methods that are globally available.
Some of the most useful APIs and interfaces for micro frontends composition are these
- Mutation Observer
- Intersection Observer
- History
- Channel Messaging
- Drag and Drop
- Picture-in-Picture
- Push API(^)
- Service Workers and WebSockets
and of course, all the client-side storage mechanisms, and the Cache(^)
(^) These are still experimental and you can find them all here
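As an example of how one of these APIs helps with composition, here is a small MutationObserver sketch that bootstraps a micro part whenever its placeholder element shows up in the DOM. The `data-micro-frontend` attribute and the mount functions are assumptions for the example.

```js
// Hypothetical registry of mount functions, one per micro frontend.
const mounters = {
  search: (el) => el.appendChild(document.createTextNode('search mounted')),
  account: (el) => el.appendChild(document.createTextNode('account mounted')),
};

// Whenever the host (or the CMS output) adds a placeholder, bootstrap the right part.
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node.nodeType === Node.ELEMENT_NODE && node.dataset.microFrontend) {
        const mount = mounters[node.dataset.microFrontend];
        if (mount) mount(node);
      }
    }
  }
});

observer.observe(document.body, { childList: true, subtree: true });
// A placeholder like <div data-micro-frontend="search"></div> added later triggers the mount.
```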
The testing part
Testing a micro frontend component individually should be the responsibility of the team that owns it, that's clear. But if you're integrating it in a page, and again, you want it to communicate and sync state with other parts of that page, you also need to test that aggregation.
It is by no means easy to design automated tests for items that are in a black box to you. So expect that part to be, if not a problem, another challenge.
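One pragmatic option is to test the aggregated page from the outside, through the browser. Here is a sketch using Playwright, where the URL, the shared event bus global and the selector are all assumptions about the composed page.

```js
// Hypothetical end-to-end check: publishing an event in one micro part
// should be reflected in another part of the composed page.
const { test, expect } = require('@playwright/test');

test('cart badge reacts to an event published by the catalog part', async ({ page }) => {
  await page.goto('https://host.example.com/catalog');

  // Simulate the catalog micro frontend publishing on the shared bus.
  await page.evaluate(() => {
    window.eventBus.publish('cart:item-added', { sku: '1234', quantity: 1 });
  });

  // The cart micro frontend, a black box to us, should update its badge.
  await expect(page.locator('[data-testid="cart-count"]')).toHaveText('1');
});
```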
So you're not recommending micro frontends?
Again, no. That's not what I imply. I believe that micro frontends are the way to go, especially under these conditions:
- you have to integrate development by teams that work in a different context (external repos, different organizations, etc)
- each part has very different cycles (like feature A needs daily releases and feature B may get some maintenance), and you don't want to redeploy the whole system because of a change in a single (or several, yet minimal) feature(s)
And to make it possible, you need...
...at minimum
- a very solid event driven strategy
- a team in charge of the orchestration, routing, and commons
- a very well documented and defined style-guide
- a performance budget, and mechanisms in place to accelerate load (particularly perceived load).
And specifically in the enterprise world?
In my opinion, micro frontends architecture works extremely well for products.
Imagine you own an application with several different features. Let's imagine it's a search engine. You could break it down like this, into components, in the frontend:
- the form elements (input, button, checkbox, radio-button, etc)
- the atomic text elements (link, label, description)
- image
- teaser
- pagination
- lazy loading mechanism (for images and infinite scroll or load more)
You'd need a service in the backend to run the queries against an index, and return the results.
Of course there should be a (frontend) build system, a state management system or pub/sub module, and a unified API/service to run queries from the frontend.
With them you could compose the search box, the facets or filters, the search results, the related results, the featured results, and even customized suggestions, combined with an analytics or target layer.
What I feel is most important is that every one of those features maps to a business capability or domain
You could easily form clusters and assign each one of those clusters to an independent team.
It is very likely you'd want the users to log in, to save their queries. You'd have a team dealing with authentication and authorization, and they'd also use components from the other teams (form elements, etc.).
All those teams would still work with the guidance of a style system, to preserve UX and the identity of your product.
But what about the enterprise website context?
But enterprise sites are usually less of a product, and more of a composition of components to create web pages that provide information about, or let users acquire, products and services.
Although you can very much assign each component to a different team that owns it, it is highly unlikely that teams are configured in a way that allows for this, and that the business strategy backs up this split.
Usually teams working on enterprise sites are not so large that you could put together a fully cross-functional team in charge of each smaller feature. And even if they were, sometimes it would not make sense.
Oftentimes websites have a single business unit behind them. They advertise or commercialize a product or service, and all the components are tightly coupled, both technically and strategically.
There may be exceptions to the rule. And there are services that you can still break apart from the whole, and assign to independent teams to own, like we mentioned before, user management, or even the search mechanism as we suggested.
But it is more likely that enterprise platforms have a repository or library of components, used across many sites, and that the image component used in the search feature is exactly the same as the one used in other places/components/modules/templates. All components are then equally important for the business stakeholder behind the site.
A really valid use case in enterprise
And that's the case when you have to integrate micro parts that are developed by remote teams. I don't want to use the term 'independent', because I still believe 100% independent is not going to help your UX and your performance. You will still want your corporate identity in the form of consistent UI/UX, to be preserved. You will still want state coherence and synchronization and communication between components of a page. You will still want to make sure you have a certain consolidation in the choice of technologies, to prevent collisions and performance degradation.
You will still want some decision matrix to rule and govern. As a Frontend Architect, concerned about performance, I wouldn't want to have Angular 9, 10 and 11, Vue 2 and 3 and React components initializing in a web page, requesting their myriad of accessories, and crashing the user experience (and the page!).
So the idea of having 100% independent, isolated teams, does not seem really feasible to me, in this context.
But the case of a team working on a really isolated feature surely merits independent release cycles, to avoid the need to rebuild and redeploy your whole monolith to pull a minor fix in a 3rd party component.
To get that capability, you need to implement Module Federation. And to read what I have learned about Module Federation, you will need to wait until my next post.
Decisions, decisions...
These slides, from my talk at the HolyJS conference, offer a few graphics I created to aid with architecture decision making, based on the need for independent deploys or encapsulation, for example.
https://slides.com/anfibiacreativa/micro-frontends/fullscreen
specifically slides 34, 35 and 52 to 56. If you're anywhere in the yellow areas, you may want to consider not using micro frontends.
Edit: You can complement the reading by listening to me discuss the topic with John Papa, Ward Bell, Dan Wahlin and Craig Shoemaker on their podcast https://webrush.io/episodes/episode-113-micro-front-ends-with-natlia-venditto
Have a wonderful holiday season, folks!