Micro Frontend Architecture: Replacing a monolith from the inside out
Zack Jackson
Posted on November 12, 2019
How to modernize a legacy application with micro-frontend technology
This article is part of a series on micro-frontend applications and techniques for managing them.
The Problem
Let's assume there's a monolithic codebase. The monolith uses a backend templating engine or system, jQuery, and has no real considerations for the frontend, or worse, dates from a time before SPAs existed. Maybe it has an asset pipeline like Ruby on Rails does. In that case, you might have backend variables inside JavaScript files, like .js.erb files or AEM fragments. A coupled, spaghetti codebase that feels next to impossible to modernize.
You want to stop writing frontend code inside this monolith and move to a more JavaScript-oriented ecosystem, but how?
Most companies cannot afford (or will not accept) the engineering downtime of a "tools down" rewrite. Features need to progress under active development, and those features are undoubtedly getting harder to release at the same speed.
The monolith needs to be broken down into smaller pieces in a progressive, transitional way that does not interrupt the business.
However, decoupling a monolith can be tricky, primarily when the new APIs needed to support a JavaScript application still require planning or development before the frontend migration can proceed.
Iterative frontend development, micro frontend (MFE) implementation, and team autonomy are blocked while waiting for the necessary APIs to be developed and go through a release cycle. FALSE: you can decouple the frontend in parallel with the backend.
Zack Jackson — ScriptedAlchemy
Here is one solution for decoupling a frontend and porting it to a standalone MFE, complete with SSR. This method allows a team to do so without waiting for backend APIs to be abstracted and decoupled into microservices, or even into consumable APIs within the monolith. Replace the monolith from the inside out.
The Blockers
Micro frontends typically have two necessary dependencies:
1) Authentication
2) Data to feed the application, both in the browser and during server-side rendering (SSR)
In my experience, user authentication always seems to be the hardest part to separate from a monolithic backend, regardless of whether the legacy system is Rails, Java, .NET, etc.
There are other challenging aspects to micro frontend architecture, which will be elaborated on in a future series. So stay tuned!
Use the Monolith as a Layout Engine
There are a couple of different architectural specifications for MFE platform design. This article focuses on an adapted specification popular amongst backend microservices: LOSA (Lots Of Small Applications) architecture, which is a good option for "inside-out" migrations.
LOSA Request/Response flow via the monolith. Image credit to Robert Arkwright
LOSA applications (and micro frontends in general) are standalone Node.js services capable of server-side rendering the part or fragment of a webpage that the service is responsible for. A page can be made up of multiple LOSA services. These apps/micro frontends are built and deployed to independent containers, operating in a standalone manner.
The same web page, composed three different ways, demonstrating an incremental migration path: starting as a monolith-rendered page, transitioning to LOSA micro frontends, and finally ending up as a micro frontend vertical that completely replaces the monolith. Image credit: Robert Arkwright
The monolith remains responsible for handling the HTTP request object and sending the final response to the client. Micro frontends can stay behind a firewall within the cluster, available only to the legacy system, until an API gateway and user authentication can be decoupled (or at least turned into an API endpoint). You don't need many changes to prepare these frontends for their post-monolith life.
The Render Flow
Below is a modeled example of what a request/response flow could end up resembling.
First, a request is made:
GET/POST 'https://MFEwebsite.com/parts/header?format=json'
Rendering a page can require a variety of data; any "missing" information that cannot yet be queried from a decoupled endpoint can be sent to the MFE (micro frontend) as props during the request. Here's what the MFE does when a request is made: the request passes through a piece of middleware responsible for rendering the React application, queries are made to any APIs that are already decoupled, and the app is rendered with all of this data as props. These props will make up window.INITIAL_STATE.
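To make the hydration side of this concrete, here is a minimal sketch of how the MFE could embed the props it rendered with into the HTML response, so the client bundle can re-hydrate from window.INITIAL_STATE. The function and element names are illustrative, not from any particular library:

```javascript
// Serialize the SSR props into the markup so the browser bundle can
// pick them up as window.INITIAL_STATE and re-hydrate the React app.
function embedInitialState(markup, initialState) {
  // Escape "<" so untrusted data cannot close the script tag early.
  const serialized = JSON.stringify(initialState).replace(/</g, '\\u003c');
  return [
    `<div id="mfe-root">${markup}</div>`,
    `<script>window.INITIAL_STATE = ${serialized};</script>`,
  ].join('\n');
}

const html = embedInitialState('<header>Hello</header>', { items: ['a', 'b'] });
console.log(html);
```

On the client, the entry point would read window.INITIAL_STATE and pass it as props to ReactDOM.hydrate, so the markup and the re-hydrated app agree.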
The code
If you need some inspiration on how to implement some of these template functions or filters, then Hypernova is worth looking at. I haven't used Hypernova myself, always opting to build my own. I have implemented similar mechanisms into Rails, Node, and PHP backends. Due to the proprietary nature of various backend platforms, I'll use Hypernova's examples to convey the rudimentary concept.
Here is what an MFE rendering endpoint could look like in Express:
The request from another system (in this case, the monolith):
GET/POST 'https://MFEwebsite.com/parts/header?format=json'
{
html: '<div> ... </div>',
css: '/static/header.3042u3298423.css',
js: '/static/header.idhf93hf23iu.js',
initial_state: {items:[...]}
}
The middleware that handles the render request and builds the response:
export function exampleRenderAPIware(req, res) {
  // Any data the monolith cannot yet expose through an API arrives on
  // the request body and becomes the app's initial state.
  const initial_state = req.body;
  // renderHTMLpage returns a promise resolving with the rendered markup.
  const asyncRender = renderHTMLpage(req, this.index, initial_state);
  asyncRender.then((renderedMarkup) => {
    const responseObject = {
      html: renderedMarkup,
      initial_state,
      js: jsResource, // asset URLs, resolved from the build manifest
      css: cssResource,
    };
    res.status(200).end(JSON.stringify(responseObject));
  });
}
Controllers making these initial POST requests should handle the responses, placing the JS and CSS into the right locations and finally rendering the React app into the appropriate spot in the legacy template. Here's an example of what that legacy template looks like now. The assets, usually handled by another controller in your monolith, are responsible for injecting those scripts and styles into the legacy header and the bottom of the body tag. Remember, the monolith still serves as the layout engine. We are replacing parts and adding new features in a React SSR way. Eventually, these LOSA apps could be stitched back together under one MFE, or with the Webpack black magic I'm developing, known as webpack-external-import.
What about migrating from template data to a new API?
When a new API is decoupled and brought online, what can be expected in migration?
While the monolith is providing the data to an MFE, Express accesses this information off the HTTP request body. Once the API is online, Express instead fetches from it asynchronously. Data formats might have changed, but React still receives props. Underwhelmingly straightforward.
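The switchover can be sketched as a single data-access function whose source changes while the render path stays the same. The endpoint URL and prop names here are hypothetical:

```javascript
// Resolve the props for the header MFE. During the transition, the
// monolith POSTs the view data and we read it off the request body;
// once a decoupled API exists, we fetch from it instead. Either way,
// the React app just receives props.
async function getHeaderProps(req) {
  if (req.body && req.body.viewData) {
    // Legacy path: the monolith still owns the data.
    return req.body.viewData;
  }
  // New path: a decoupled API endpoint (hypothetical URL), using the
  // global fetch available in Node 18+.
  const response = await fetch('http://header-api.internal/v1/header');
  return response.json();
}
```

The rendering middleware calls this once per request and passes the result into the React app, so migrating a data source never touches the render code.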
Performance
Compared to the old monolith, the new LOSA (Lots of Small Applications) architecture initially wasn't performant enough, taking 400–600ms for a part of the page to render. We used async worker structures, meaning that instead of one SSR'd app, we could ask multiple services to SSR different parts of the application. This also made it very hard to take production fully offline, because a "production failure" meant maybe losing a sidebar or footer for 10 minutes until it was fixed. Separation of concerns at its finest.
Here’s what I mean by LOSA async workers. There were many node services, each responsible for rendering a component/components
Controllers (the grey gear) powering a legacy backend view can divert the view data to a POST request instead of to a backend templating engine. Recycling data means there isn't much effort required on the backend to support these mechanics. Avoiding major modifications frees up most of backend engineering to focus on decoupling data providers, while the frontend progresses independently. Since the view data is posted to an external React service, the response to that POST, which contains the markup, is then passed to the backend templating engine, along with the stylesheet and JS URLs and the initial state. The templating engine now just renders the response from the POST request, thus decoupling your view, or part of it, from the legacy monolith.
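The monolith-side change can be sketched roughly like this. It's shown as Node-style pseudocode for consistency with the rest of the article, though the real monolith might be Rails, PHP, or Java; the transport is injectable so the sketch stays testable:

```javascript
// Monolith controller helper: instead of handing view data to the
// backend templating engine, POST it to the LOSA service and return
// the rendered fragment for the layout to splice in.
async function renderPartViaMFE(part, viewData, transport = httpPost) {
  const endpoint = `https://MFEwebsite.com/parts/${part}?format=json`;
  // Response shape mirrors the JSON example earlier in the article:
  // { html, css, js, initial_state }
  return transport(endpoint, viewData);
}

// Default transport using the global fetch available in Node 18+.
async function httpPost(url, body) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.json();
}
```

The layout template then drops `html` into the view, `css` into the head, and `js` before the closing body tag, exactly as described above.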
React Render Time
React was slow! SSR just isn't fast, so our new LOSA architecture wasn't performant enough to be viable. Our solution: fragment caching inside React.
* Yellow: no React fragment caching, end-to-end (±400ms)
* Dark purple: with React fragment caching, end-to-end (±150ms)
* Orange: the fully optimized architecture (±20ms)
* Green (at the bottom of the data points): native fragment cache from the backend
I will need to write another article to detail the whole process of creating a fully optimized React server (sorry, Anton Korzunov). The Grafana data shows that we at least doubled our render performance, but the round-robin times were still really slow. While internally React was able to render very quickly, the end-to-end times were not as performant as we hoped: at least 150ms. As you can see, and as I'll elaborate on in the next article, this is how I can compete with backend fragment caching.
Render time vs round-robin time
Render times are only part of the challenge. Even after implementing fragment caching inside React, I was disappointed to see that while our internal render times within Node.js were blazing fast (around 20ms), the whole end-to-end trip still took 140–200ms.
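The fragment-caching idea can be illustrated with a hand-rolled sketch: cache the rendered markup of expensive, mostly static subtrees, keyed by the props that affect their output. This is illustrative only, not the exact production mechanism (which the follow-up article will cover):

```javascript
// Naive fragment cache: memoize rendered markup by component name and
// the props that determine its output. No TTL or eviction, for brevity.
const fragmentCache = new Map();

function cachedRender(key, props, renderFn) {
  const cacheKey = `${key}:${JSON.stringify(props)}`;
  if (!fragmentCache.has(cacheKey)) {
    fragmentCache.set(cacheKey, renderFn(props));
  }
  return fragmentCache.get(cacheKey);
}

// Usage: renderFn would normally wrap ReactDOMServer.renderToString on
// a subtree; a plain template-string renderer stands in for it here.
const markup = cachedRender('footer', { year: 2019 },
  (p) => `<footer>© ${p.year}</footer>`);
console.log(markup);
```

In practice the cache key must cover every prop that changes the output, and per-user fragments should either be excluded or keyed by user, otherwise you serve one visitor's markup to another.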
What are the bottlenecks?
1) JSON size, specifically the initial application state. Send the bare minimum state needed to render the page, and stop dumping so much stringified state into the initial render. Send enough state for React to re-hydrate, plus maybe some extra for making above-the-fold components interactive immediately.
2) The number of DOM nodes to render. Stop wrapping your code in useless divs just to put a class on them. Take advantage of the semantic nature of HTML and the cascading effects of CSS. You end up writing far less markup, causing far fewer React.createElement calls to be generated.
3) Garbage collection. More details will follow in a follow-up article series.
4) You are only as fast as your data providers. Make use of Redis caches in the middle tier. If you throw the "cache invalidation is hard" argument at me, look at event sourcing. Better still, tackle the problem with CQRS and async workers on the writes and reads.
5) HTTP overhead between the monolith and the MFE: gRPC, CQRS, UDP, Protobuf. This communication between the monolith and your MFE should happen internally over the Kubernetes network. POST is slow but works; when you hit the problem, handle it accordingly.
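The middle-tier cache from point 4 is essentially cache-aside: only hit the slow data provider on a miss. A minimal sketch, where the store is any object with get/set (a Map here; a Redis client in production, with TTL and invalidation elided):

```javascript
// Cache-aside lookup: return a cached value when present, otherwise
// fetch from the upstream data provider and populate the cache.
async function getWithCache(store, key, fetchUpstream) {
  const hit = await store.get(key);
  if (hit !== undefined && hit !== null) return hit;
  const fresh = await fetchUpstream(key);
  await store.set(key, fresh);
  return fresh;
}
```

With event sourcing or CQRS, the write path can invalidate or rewrite these keys as events arrive, instead of relying on TTL expiry alone.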
How I outperformed backend rendering
Templatization, fragment caching, gRPC/CQRS, and removing bloat from the initial-state JSON. React is slow(er) on the server. It's easy to forget: no abstraction is faster, only less slow.
What about scale?
Any good solution needs to be cost-effective at scale; otherwise operating costs grow to astronomical amounts. Money talks, and bad performance costs. I build cheap solutions at scale. Here are ways perf can cost you:
1) paying for expensive third-party services to shoulder the load
2) paying for more / bigger containers
3) missed revenue due to bad performance
4) The monolith usually causes release-cycle or deployment traffic jams, as two branches cannot go into master at the same time.
5) Developers can move faster in lower-risk environments; the business is able to deliver new ideas to market and roll back problematic areas. A team that can move fast and efficiently is a cost-effective enabler for the business.
The result
Traffic: 10 million (renders) / day
Resource Allocations:
* Instances: 5
* RAM: 100Mi (100 megabytes of RAM)
* CPU: 100 (single-core)
* Max CPU usage threshold: 65%
* Response time: 20–25ms
* DOM Complexity: High
* 95% reduction in response time.
* Green: Backend render times
* Blue: React with fragment caching and state optimization
My single-threaded JavaScript application was faster than a multi-threaded backend system with a fully-fledged fragment cache.
Follow me on Twitter :)
DM me if you need more information or a consultation.
In the next article in this series, I'll elaborate on the performance aspects briefly mentioned towards the end of this one.