The Implementation Details of Preply’s Design System Visual Coverage (part II)
Stefano Magni
Posted on October 3, 2024
This article was originally posted on Preply's engineering blog.
Implementing the coverage algorithm, building the process that resulted in the dashboard, working with teams to increase the coverage, implementing it for the App, and more required months of work. In this (very long) article, we discuss all the project's technical details. We also made the visual coverage code available on GitHub at preply/design-system-visual-coverage, so you can use it for your product too.
If you are new to the topic, please read the first non-technical article explaining the whys and hows behind the design system visual coverage project.
Web implementation
The visual coverage performs two main operations:
- Parsing the DOM to gather all the elements and their boundaries
- Creating a bitmap out of the element boundaries and calculating the coverage
As explained in the first article, we measure the visual coverage on users' devices. Browsers provide all the APIs we need to avoid interfering with the user's experience: requestIdleCallback and Web Workers. The following graph shows how we decided to run the calculation.
This timeline graph shows when and how we want to retrieve all the DOM element properties (when the user is idle) and that the heavy pixel count happens in a Web Worker.
Some of our pages are SSR'ed; others are part of a regular SPA; we are exploring using RSC… But from a visual coverage perspective, it doesn't matter. We measure the visual coverage on the client, on all the users' devices every five minutes (at .00, .05, .10, and so on), across all Preply.com's pages, regardless of what the users are doing (scrolling the page or taking an online lesson). At the moment, we are collecting 150K-350K events per day. This high number of events guarantees that every single Preply.com page is tracked.
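To give an idea of the shape of this flow, here is a minimal sketch: DOM reading happens in interruptible idle chunks, and the heavy counting is handed to a Web Worker. The helper names and the worker file are our own illustration, not Preply's actual API (the real code lives in the repository):

```ts
// Illustrative sketch: read the DOM in idle slices, then count pixels in a Web Worker.
type TrackedRect = {
  top: number;
  left: number;
  width: number;
  height: number;
  isDsComponent: boolean;
};

function scheduleCoverageCalculation(onDone: (dsPixelRatio: number) => void) {
  const rects: TrackedRect[] = [];
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_ELEMENT);

  function readChunk(deadline: IdleDeadline) {
    let node = walker.nextNode();
    while (node) {
      const element = node as Element;
      const { top, left, width, height } = element.getBoundingClientRect();
      rects.push({
        top,
        left,
        width,
        height,
        isDsComponent: element.hasAttribute('data-preply-ds-component'),
      });
      if (deadline.timeRemaining() <= 0) {
        requestIdleCallback(readChunk); // resume later from the same TreeWalker position
        return;
      }
      node = walker.nextNode();
    }
    // All rects collected: hand the heavy pixel counting to a Web Worker.
    // The worker file is hypothetical; URL resolution depends on your bundler.
    const worker = new Worker(new URL('./coverageWorker.ts', import.meta.url));
    worker.onmessage = (event) => onDone(event.data as number);
    worker.postMessage(rects);
  }

  requestIdleCallback(readChunk);
}
```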
Preparatory steps
Step 1: We updated all the Design System components to include a dedicated DOM attribute, data-preply-ds-component, whose value is the name of the React component. This allows us to distinguish Design System DOM elements from all the others. It also decouples the coverage script from the React.js nature of the website, so no dedicated build and deployment steps were needed.
Inspecting Preply.com's pages reveals all the design system component data attributes.
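For illustration, a design system component could attach the attribute like this. The attribute name is the real one; the component itself is a simplified sketch, not the actual Path implementation:

```tsx
import React from 'react';

// Simplified sketch of a DS component rendering its own marker attribute.
export function Button(props: React.ButtonHTMLAttributes<HTMLButtonElement>) {
  return <button {...props} data-preply-ds-component="Button" />;
}
```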
Step 2: Create the visual coverage containers. In our case, this translated into setting the data-preply-ds-coverage attribute on all the pages, assigning them a name and the team the page belongs to. In the attribute's JSON value, the page is generically called component because the same logic applies to smaller components (like the unified header, the calendars, the chat widget, etc.) and allows splitting the responsibility for the same page across different teams.
Inspecting Preply.com's pages reveals all the design system coverage container data attributes.
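A container could then look like the following sketch. Again, the attribute name is the real one, but the exact JSON shape and the values are our assumption; check the repository for the actual schema:

```tsx
import React from 'react';

// Illustrative sketch: a page marked as a coverage container.
// The `component` and `team` values are hypothetical.
const coverageContainer = JSON.stringify({ component: 'SearchPage', team: 'search-team' });

export function SearchPage() {
  return <main data-preply-ds-coverage={coverageContainer}>{/* page content */}</main>;
}
```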
Using a bitmap to recreate the page’s pixels
Counting the number of colored pixels in the DOM is a slow operation, even if you work with a canvas (we haven’t tried OffscreenCanvas, though). So, we thought of a way to “recreate” the page with simple data we could mutate and aggregate quickly. An array of colors (technically speaking: a bitmap) would be perfect!
The idea is:
- Traversing the whole DOM, reading the bounding rect (top, left, width, height) of every element.
- Creating a bi-dimensional array in which every item represents whether a pixel of the page comes from the Path Design System or not.
- Counting the pixels in the array.
To show how the bi-dimensional array could look, consider a hypothetical 20x20 pixels page with some flex containers, a heading, two buttons, and a footer that are Path Design System components.
[
[🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥], // <-- non-DS components' borders are red
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟥],
[🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥],
[⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️],
[🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥],
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟥],
[🟥,⬜️,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,⬜️,🟥],
[🟥,⬜️,🟩,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩,⬜️,🟥],
[🟥,⬜️,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,⬜️,🟥],
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟥],
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩,🟩,🟩,🟩,🟩,⬜️,🟩,🟩,🟩,🟩,🟩,⬜️,🟥], // <-- DS components' borders are green
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩,⬜️,⬜️,⬜️,🟩,⬜️,🟩,⬜️,⬜️,⬜️,🟩,⬜️,🟥],
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩,🟩,🟩,🟩,🟩,⬜️,🟩,🟩,🟩,🟩,🟩,⬜️,🟥],
[🟥,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟥],
[🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥,🟥],
[🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩],
[🟩,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩], // <-- The rest of the page is white
[🟩,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩],
[🟩,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,⬜️,🟩],
[🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩,🟩],
]
For the sake of simplicity, the example shows a 1-pixel border for all the components. In reality, the component weights (see the previous article's chapter about component weights) determine the border width.
A bitmap is generated for each visual coverage container. In the case of nested containers (like a page that belongs to team A but contains a component that belongs to team B), the area of the nested container looks empty in the parent container’s bitmap.
Please note: there are two approximations in the DOM->array transformation. The algorithm does not respect:
- CSS stacking contexts (aka z-index): every element is above the previous one. This isn’t a big deal, given the nature and structure of Preply.com’s pages (which have a document-like structure, rarely relying on z-index, apart from the Classroom).
- Content overflow: this limitation can be easily overcome by using containers, which limit the boundaries of the children.
We haven't tested the visual coverage code on UIs that rely heavily on fixed positioning or are non-scrollable. If you do, please let us know 😊.
Performance optimizations
We quickly validated the idea with a throwaway POC (this is where ChatGPT shines). Still, we went through some serious performance optimization work to move from the initial 150 blocking milliseconds for a tall page (on my machine™) to the final 50 non-blocking milliseconds. We also reduced the memory footprint from 30 MB to 300 KB. To achieve these improvements, we involved Massimiliano Mantione, a former Google Chrome V8 engineer, and Matteo Ronchi, a software architect at WorkWave who is now using the same visual coverage approach on a very different-from-Preply product.
We applied some obvious optimizations (like skipping SVG contents or hidden elements) and some less obvious ones, like:
- The array should be one-dimensional instead of bi-dimensional. This optimizes memory access, since accessing a memory address to find another memory address (required to access an item of an array stored inside another array) is not optimal for the CPU (see the sketch after this list).
- Every item of the array must use the minimum possible number of bits. Strings are wrong from this point of view because their memory usage is dynamic by default. Typed arrays are the best choice.
- If you write branch-free code (if conditions are branches), the JIT compiler can optimize your code, resulting in C++-like performance.
- The “count the pixels” code could be moved to a Web Worker and run without interruptions. This does not greatly improve performance, but it significantly improves the code's readability. The equivalent calculation made through requestIdleCallback forces you to write interruptible code, which is inevitably much harder to read than straightforward synchronous code running in a Web Worker.
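As a sketch of the first three points, a flat typed array plus branch-free counting could look like this. The constants and function names are ours, not Preply's exact code:

```ts
// 0 = empty (the default value of a fresh Uint8Array), 1 = non-DS pixel, 2 = DS pixel.
const EMPTY = 0;
const NON_DS = 1;
const DS = 2;

// One flat array: index = y * pageWidth + x (no nested arrays, no pointer chasing).
function paintRect(
  bitmap: Uint8Array,
  pageWidth: number,
  rect: { top: number; left: number; width: number; height: number },
  color: number,
) {
  const top = Math.max(0, Math.round(rect.top));
  const left = Math.max(0, Math.round(rect.left));
  for (let y = top; y < top + Math.round(rect.height); y++) {
    const rowStart = y * pageWidth;
    for (let x = left; x < left + Math.round(rect.width); x++) {
      bitmap[rowStart + x] = color;
    }
  }
}

// Branch-free counting: boolean-to-number coercion instead of if/else branches.
function countPixels(bitmap: Uint8Array): { ds: number; nonDs: number } {
  let ds = 0;
  let nonDs = 0;
  for (let i = 0; i < bitmap.length; i++) {
    ds += +(bitmap[i] === DS);
    nonDs += +(bitmap[i] === NON_DS);
  }
  return { ds, nonDs };
}
```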
All of the above allowed us to significantly reduce the execution time. Some tests on old devices showed the following:
- OnePlus 6 (top-notch 2018 phone): 650ms.
- Huawei P9 (cheap 2016 phone): 900ms. Interestingly, the Huawei P9 (4x slower in everyday use) performs less than 50% worse than the OnePlus 6. The hardware is okay with this kind of browser operation.
Is it necessary to make the script 100% non-blocking? The answer is yes! Our initial implementation included a very small blocking phase. Still, two users (out of the 1% of Preply users included in the initial experiment) faced a prolonged blocking phase, with a worst-case duration of more than 400ms. See the two spikes in our monitoring graph.
By making the script asynchronous, we have to live with the fact that, since we split reading the DOM over multiple idle callbacks, we could count the coverage on a half-changed page. We think this is tolerable, especially because a 100% non-blocking count can run whenever we want (even immediately after the React application is initialized), which increases the chances of intercepting users who quickly move through pages (if you count the coverage infrequently, some pages could never be included in the stats).
To limit this issue, we stop counting the coverage when the user clicks, when the URL changes, etc.
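A sketch of how such cancellation can be wired, using an AbortController (the wiring below is our illustration, not Preply's exact code):

```ts
// Illustrative: cancel an in-flight coverage calculation on user interaction.
function runCancellableCoverage(run: (signal: AbortSignal) => void) {
  const controller = new AbortController();
  const cancel = () => controller.abort();
  window.addEventListener('click', cancel, { once: true });
  window.addEventListener('popstate', cancel, { once: true }); // URL changes
  run(controller.signal); // the calculation checks signal.aborted between idle chunks
}
```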
Sample rate and events cardinality
As mentioned, we measure the visual coverage on all the users' devices every five minutes (at .00, .05, .10, and so on), across all Preply.com's pages, whatever the users are doing. At the moment of writing, we collect 150K-350K events per day. They are sent to a custom endpoint and forwarded to DataDog.
We don't pay DataDog based on the number of events but on their cardinality. The cardinality is inherently high because the most essential event data are the team (16 teams, at the moment of writing) and the component (the page or component name; there are more than 100). We sacrificed other data (like the user type, the release version, etc.) to reduce the events' cardinality and not incur additional costs.
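In practice, the event payload stays minimal, along these lines (the exact schema is our assumption, not Preply's actual one):

```ts
// Illustrative event shape: only low-cardinality tags are kept.
type CoverageEvent = {
  team: string; // ~16 distinct values at the moment of writing
  component: string; // >100 distinct values (the page or component name)
  coveragePercentage: number; // the measured value itself, not a tag
};
```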
App (React Native) implementation
Let’s start with two considerations that differentiate tracking the coverage on Preply’s app compared to Preply.com:
- When we started working on the app, React Native didn't provide handy performance-oriented APIs like requestIdleCallback or Web Workers. Since we can't easily be 100% sure we don't impact Preply's users' UX, we don't measure the Design System's visual coverage in production.
- The app’s E2E tests cover 100% of the use cases.
Please note: requestIdleCallback for React Native's new architecture was released in August 2024 (see the PR) as part of React Native v0.75. Once we upgrade the app to React Native v0.75.0, we can also ship the coverage count for the app in production.
We then leveraged the app’s E2E tests to count the coverage and send the results to DataDog.
Most of the implementation is shared with the Web counterpart. There are only two differences:
- How we retrieve the UI elements’ boundaries. By design, the web offers an open and parseable tree, but mobile apps don’t.
- How we identify Path Design System components, which heavily depends on the first point.
After some experiments, we decided to:
- Leverage some straightforward Swift code to gather all the elements’ boundaries (at the moment, we are counting the coverage only on iOS)
- Identify Path Design System components through React Native's testID, which converts to iOS' accessibilityIdentifier (see the sketch after this list). This also required updating our test utilities, because testID is used in the E2E tests to interact with some elements.
- Launch the coverage count from E2E tests through a dedicated deep link, one of the few options for triggering an internal function from an external testing tool.
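As a sketch of the second point, a React Native DS component could mark itself like this. The 'preply-ds-' prefix and the merging convention are our illustration, not necessarily Preply's exact approach:

```tsx
import React from 'react';
import { Pressable, Text } from 'react-native';

// Simplified sketch: the testID marks the component as a DS one, and iOS
// exposes it as accessibilityIdentifier.
export function DsButton({ label, testID }: { label: string; testID?: string }) {
  // Combine the DS marker with any E2E-provided identifier, since testID
  // serves both purposes (this is why the test utilities needed updating).
  const id = ['preply-ds-Button', testID].filter(Boolean).join('--');
  return (
    <Pressable testID={id}>
      <Text>{label}</Text>
    </Pressable>
  );
}
```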
What about view hierarchy files?
We experimented with the iOS View Hierarchy. On paper, it would have worked (see the screenshot) since it contains all the data we need, but unfortunately, it's a binary file that can only be read in Xcode.
The view hierarchy of Preply's App's main screen contains all the data we need. However, this data is inaccessible outside of Xcode.
The Design System visual coverage score and dashboard
Preply's official Design System visual coverage dashboard.
We have two dashboards:
- The official one is used and referenced by everyone to check the current coverage, identify possible areas for improvement, and eventually set teams’ OKRs.
- A monitoring one is used by the Design System team to identify errors, slower calculations, distribution over different devices, etc.
How We Test the Visual Coverage Code
Only some key parts of the whole code are tested. This is enough for me to be sure the critical parts of the code work. The most exciting part is how the main web function (createCalculateDsVisualCoverages) is tested: through Vitest browser mode. Running the tests in a real browser matters because the visual coverage code relies on native but complex browser APIs (requestIdleCallback and Web Workers), and exercising the real APIs is key to writing reliable unit tests. You can check the test code here.
Apart from that detail, all the code heavily relies on dependency injection, which makes testing as easy as possible.
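For instance, injecting the scheduling APIs instead of referencing globals makes them trivial to fake in a test. The names below are our own invention, a sketch of the pattern rather than the actual code:

```ts
// Illustrative sketch: inject browser APIs instead of referencing globals.
type IdleDeps = {
  requestIdleCallback: (cb: (deadline: { timeRemaining: () => number }) => void) => number;
};

function createCoverageRunner(deps: IdleDeps) {
  return (work: () => void) => deps.requestIdleCallback(() => work());
}

// In a unit test, inject a fake that runs the callback synchronously:
const runner = createCoverageRunner({
  requestIdleCallback: (cb) => {
    cb({ timeRemaining: () => 50 });
    return 1;
  },
});
runner(() => console.log('executed synchronously in the test'));
```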
How Teams Can Play with the Visual Coverage
We expose some utilities on the global object (window) so that teams can play with the visual coverage; exposeGlobalDsVisualCoverageObject is the function that exposes them. The most important ones are the following:
- __PREPLY_DS_COVERAGE.runAndLog(): runs the calculation and logs everything in the console, but does not send calculation events to DataDog.
- __PREPLY_DS_COVERAGE.runAndVisualize(): the same as above, but also adds some visible rectangles to the page.
- __PREPLY_DS_COVERAGE.runAndVisualizeContainer(componentName: string): the same as above, but only for a single container. This is particularly useful for modals.
- __PREPLY_DS_COVERAGE.reset(): removes the colored rectangles.
So, everywhere (on prod, staging, Storybook, locally), product teams can just launch __PREPLY_DS_COVERAGE.runAndVisualize() in the browser’s console to get the calculation.
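A typical exploration session in the console looks like this, using only the functions listed above:

```ts
// Run the calculation, log the results, and paint the rectangles on the current page...
__PREPLY_DS_COVERAGE.runAndVisualize();
// ...then clean up when done.
__PREPLY_DS_COVERAGE.reset();
```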
Test it on Your Product
After reading the why behind this project in the initial article and digging into all the tech details here, the next step is to experiment with the visual coverage yourself. In the preply/design-system-visual-coverage repository, we shared:
- The entire implementation of the visual coverage, which lives in our design system repository (we will update it periodically as we evolve it).
- All the code that consumes the visual coverage APIs in the product and sends the data to DataDog.
If you try the same approach on your product, tweak the code, etc., please share what worked and what didn't for you 🤗
Special Thanks
I want to publicly give kudos to all the people involved in this project:
- The whole design system team: Justine, Seif, Alex, Javi, Alisa. I love this team 😍
- Vadym, Igor, Josh, Dmytro, Dominika, for the support and feedback 😊
- Oleg, for creating the dashboard and helping with all the data.
- Daniel, Alberto, for the early internal feedback.
- All the Preply front-end engineers for supporting the design system, and this initiative. All the Preply app engineers for helping with the React Native implementation.
- Javier, Serhii, Volodymyr, for supporting me with the React Native implementation.
- Matteo, for trying the visual coverage with a different company.
- Massimiliano, for the performance suggestions.
- Dmitry, Omri, for the feedback.
- Nicolas, for the frigging detailed review ❤️.
Would you like to join me and work in a purpose-driven organization where work, growth, and learning happen at the same time? Preply keeps growing, and we are actively looking for talented candidates to join our Engineering team! If you are excited about taking on a new challenge, check out our open positions here.