Sylvain Hamann
Posted on June 21, 2022
In this post I will share tips from my past experience to help you test the accessibility of your web application or website.
What's accessibility?
In my own words, I would say that the goal of accessibility (often abbreviated to a11y) is to make sure that anyone, on any medium, can use your web application or website. A lot of us browse the web on a computer with a keyboard and a mouse, or via a mobile device with a touchscreen. Some people might use a keyboard exclusively, while others rely on assistive technologies such as screen readers (by the way, mobile devices have screen readers too!). We have to make sure that our user interface is usable in all those cases.
If you're looking for a deeper explanation of a11y, I suggest reading this MDN doc.
Pull request review
When someone opens a pull request affecting the user interface, I expect to be able to try their changes locally or on some sort of staging instance. I won't just review the code; I will also test the app in multiple scenarios, e.g. different browsers at several screen resolutions. But don't be biased by your own habits: not all users consume the Web like you do. People don't all have a powerful MacBook Pro with a giant external monitor and high-speed internet. That's why it's important to write down testing practices that each reviewer should go through before approving any PR. For example it could say "test the changes with a combo of mouse and keyboard, keyboard only, screen reader only, touch screen, motion disabled, etc.". This might seem long, but automatic tests and tools won't catch every flaw in your UX, and it's better than accumulating too many issues.
Tools to audit a web page
There are several tools that will help you detect a11y problems:
- Chrome users might already know the Lighthouse tab in their dev tools. It can generate an accessibility report.
- However, I prefer the axe DevTools extension because it's much faster to run. It highlights the nodes affected by each issue and gives great tips on how to fix them. For example it will detect color contrast issues, images with no alt text, invalid HTML semantics...
- Chrome DevTools can also help to fix contrast ratios or visualize the accessibility tree.
- You might know Browserstack, which lets you do cross-browser testing on desktop & mobile with real browsers. Let me introduce you to Assistiv Labs, which lets you use real assistive technologies from different OSes. It's not just screen readers: you can try other technologies such as the Windows high contrast mode.
⚠️ Having a Lighthouse score of 100 or zero issues in your axe report does not mean that your web app is fully accessible.
Automatic tests
DOM Testing
Whatever framework or library you use, in the end your users' browsers will always render HTML. Semantic tags such as `main`, `header` and `footer` are not only meant for SEO: screen readers rely on them to describe the current page to the user. Testing Library will help you write automated tests based on the DOM. The goal is to have tests that represent a real user flow instead of testing implementation details. It's compatible with most popular libraries, frameworks and tools.
I recommend always rendering the whole app / page and using APIs such as `within` if you want to test only one section of the UI. If a component is meant to be re-used across apps (e.g. when creating a design system), then I will also create a test suite just for that component.
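For instance, here is a minimal sketch, assuming a React app with a hypothetical `App` root component, that scopes queries to the navigation landmark:

```js
import { render, screen, within } from "@testing-library/react";
import "@testing-library/jest-dom";
import App from "./App"; // hypothetical root component

test("the navigation contains a link to the home page", () => {
  render(<App />);
  // Scope queries to the <nav> landmark instead of the whole document.
  const nav = within(screen.getByRole("navigation"));
  expect(nav.getByRole("link", { name: "Home" })).toBeInTheDocument();
});
```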
Let's see different examples based on real tests I wrote in the past:
I contributed to a Single Page Application built with React. Since the browser does not reload after a page change, I had to implement some logic to improve accessibility:
- focus the `h1` of the new page;
- render an `aria-live` region announcing the new page.
Tests would look like this:

```js
userEvent.click(screen.getByText("Page link"));
expect(screen.getByRole("heading", { name: "Page title", level: 1 })).toHaveFocus();
expect(screen.getByText("You navigated to Page title")).toHaveAttribute("aria-live", "polite");
```
With my team, we had to build two different view modes for the same dataset. To do that, we made two buttons using the `aria-pressed` attribute. We wrote tests that interacted with each button and asserted their "pressed" state:
expect(screen.getByRole("button", { name: 'Table view', pressed: true })).toBeInTheDocument();
expect(screen.getByRole("button", { name: 'List view', pressed: false })).toBeInTheDocument();
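To cover the interaction itself, the test can then switch modes and assert that both pressed states flip. A minimal sketch following the assertions above:

```js
// Switch to the list view and check that the pressed states are swapped.
userEvent.click(screen.getByRole("button", { name: "List view" }));
expect(screen.getByRole("button", { name: "List view", pressed: true })).toBeInTheDocument();
expect(screen.getByRole("button", { name: "Table view", pressed: false })).toBeInTheDocument();
```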
You might be used to seeing mobile designs with a burger (🍔) icon to expand the navigation. For this you could write tests that assert that the icon has the correct alternative text, and check that the `aria-expanded`, `aria-controls` and `id` attributes are present on the right nodes. Plus, you can use the `isInaccessible` API to make sure that the mobile menu is correctly hidden but still present in the DOM. The mobile menu usually has a focus trap, so I would use `userEvent` to press tab multiple times and make sure it loops correctly.
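Here is a minimal sketch of such a test, assuming a hypothetical `MobileNav` component whose toggle button controls a focus-trapped menu:

```js
import { render, screen, isInaccessible } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import "@testing-library/jest-dom";
import MobileNav from "./MobileNav"; // hypothetical component

test("the burger button reveals a focus-trapped menu", () => {
  render(<MobileNav />);

  const toggle = screen.getByRole("button", { name: "Open navigation" });
  const menu = document.getElementById(toggle.getAttribute("aria-controls"));

  // Closed by default: hidden from assistive technologies but still in the DOM.
  expect(toggle).toHaveAttribute("aria-expanded", "false");
  expect(isInaccessible(menu)).toBe(true);

  userEvent.click(toggle);
  expect(toggle).toHaveAttribute("aria-expanded", "true");
  expect(isInaccessible(menu)).toBe(false);

  // Press tab a few times: the focus trap should keep focus inside the menu.
  userEvent.tab();
  userEvent.tab();
  expect(menu).toContainElement(document.activeElement);
});
```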
As you can see, when writing automatic tests you also have to remember that users don't only use a mouse to interact with your UI. It's good to cover every scenario to avoid regressions, especially the ones you won't be able to spot easily.
Visual tests
With tools such as Chromatic or Percy you can render a page after certain actions and take a snapshot of it in order to detect visual regressions. When building a design system you could have several snapshots of each component in different states, such as the focus state, to make sure the focus style is not broken.
Chromatic depends on Storybook, which also provides accessibility tests that can run in your CI too. This helps to automatically detect regressions without running axe DevTools on every page.
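As a sketch of how that can be wired up (assuming Storybook 6.x; the stories glob is illustrative), the a11y addon is registered in the Storybook config and then surfaces axe violations for each story:

```js
// .storybook/main.js — register the a11y addon so axe checks run on every story
module.exports = {
  stories: ["../src/**/*.stories.@(js|jsx|ts|tsx)"],
  addons: ["@storybook/addon-a11y"],
};
```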
Visual tests can be very annoying if they are not stable, so make sure to fix flaky tests when you detect them. A co-worker found out that disabling CSS animations when running those tests helps a lot (see the sketch below).
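One way to do that — a sketch, not my co-worker's exact setup — is to inject a global stylesheet from your test setup that switches animations and transitions off:

```js
// test-setup.js (hypothetical): stabilize visual snapshots by disabling CSS motion.
const style = document.createElement("style");
style.innerHTML = `
  *, *::before, *::after {
    animation: none !important;
    transition: none !important;
  }
`;
document.head.appendChild(style);
```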
Ask real humans
If you can afford to have real users test your web app, then please do it! This is not meant to replace any manual or automatic test you would do before shipping a PR. The purpose is to challenge all the assumptions and decisions you made during the design and development phases. I have seen many companies use tools such as InVision or Figma prototypes during those interviews. I guess that's great during the prototype / design phase, but don't forget to test the real app with real users who rely on assistive technologies. After running those interviews, or after a WCAG audit, any fix should come with an automatic test in order to prevent regressions in the future.
Team communication
This paragraph is not really about testing, but I want to give one last piece of advice as a conclusion. I think my biggest frustration as a frontend developer is receiving "incomplete" designs that force me to improvise or do a lot of back and forth with the design team. To avoid that, it's good to set expectations with your design team before they send you their designs, for example:
- each image or icon should have alternative text;
- focus management: what should the focus order be? which element should be focused when revealing this section? etc.
- each page should have a document title and an `h1`, and each section should have an `h2`, etc.
- what should the alternative animation be when reduced motion is on? (see the sketch below)
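On the implementation side, JS-driven animations can check that preference via `matchMedia`; a minimal sketch with a hypothetical `.panel` element and illustrative keyframes:

```js
// Respect the user's "reduced motion" preference for a JS-driven animation.
const prefersReducedMotion = window.matchMedia(
  "(prefers-reduced-motion: reduce)"
).matches;

const panel = document.querySelector(".panel"); // hypothetical element
panel.animate(
  prefersReducedMotion
    ? [{ opacity: 0 }, { opacity: 1 }] // simple fade as the alternative
    : [
        { transform: "translateY(20px)", opacity: 0 },
        { transform: "none", opacity: 1 },
      ],
  { duration: 300, fill: "forwards" }
);
```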
Building a design system will help a lot, since designers can re-create each component in Figma and the whole team will use the same vocabulary when talking about the UI. Documenting each component with best practices and listing DOs and DON'Ts is also very helpful.
Happy testing!