Automated Testing with Playwright
Tony Wallace
Posted on March 28, 2023
There are four basic types of software tests that can be automated:
- End-to-end testing of user flows with live data (integration testing)
- End-to-end testing of user flows with mock data (isolated user interface testing)
- Isolated testing of individual components
- Unit testing
This article will focus on end-to-end testing, but first a few notes about the other types of tests.
Playwright has experimental support for testing components in React, Vue and Svelte. I have not yet been able to integrate this into one of RedBit's React projects because Playwright uses a different bundler than we do. (Playwright uses Vite while we use Webpack.) Many of our components rely on specific Webpack configuration and some custom plugins that can't easily be replicated in Vite. While I'm sure this is a surmountable problem, I'm not sure it's worth the effort. It would likely take less time to build out pages that render components so they can be tested with the regular Playwright APIs; those pages could also serve as a reference library for developers.
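For example, a hypothetical reference page that renders a single component in isolation could be exercised with the regular page APIs. The `/component-library/button` route and the behaviour it renders below are assumptions for illustration only:

```js
import { test, expect } from '@playwright/test';

// A sketch of testing a component through a reference page rather than the
// experimental component-testing API. The route below is hypothetical.
test('Button renders its label and responds to clicks', async ({ page }) => {
  await page.goto('/component-library/button');
  const button = page.getByRole('button', { name: 'Submit' });
  await expect(button).toBeVisible();
  await button.click();
  // Assumes the reference page renders a confirmation message on click.
  await expect(page.getByText('Button clicked')).toBeVisible();
});
```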
Playwright isn't first and foremost a unit test runner, so I won't discuss it in that context. RedBit uses Jest for unit testing in web projects. Jest uses a similar assertion syntax to Playwright, which helps reduce cognitive overhead for developers.
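To illustrate the overlap, here is a rough comparison; the helper, values and locator are made up for the example:

```js
// Jest: synchronous assertions on plain values.
expect(formatCurrency(25)).toEqual('$25.00');

// Playwright: the same expect() style, but with async, web-first assertions.
await expect(page.getByRole('heading', { name: 'Products' })).toBeVisible();
```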
End-to-end user flow testing
End-to-end testing of user flows aims to simulate the actions a user would perform while using an app, and verify that those actions have the expected outcomes. This is Playwright's main purpose.
Automating a user flow
Writing automated end-to-end tests normally involves determining the sequence of actions that a user would perform in the app, translating them manually to code, then adding assertions to verify that the expected actions were actually performed. For example, you might navigate to a certain page in your app and verify that the browser's location is set to the expected url, then simulate a click on a link and verify that the browser's location has changed to the link's url. At points along the way you might want to verify that certain messages or other components are visible on screen.
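A minimal hand-written test for that kind of flow might look something like the sketch below; the routes, link text and heading are placeholders, not part of a real app:

```js
import { test, expect } from '@playwright/test';

test('Navigating from the home page to the about page', async ({ page }) => {
  // Navigate to the home page and verify the browser location.
  await page.goto('/');
  await expect(page).toHaveURL('/');
  // Simulate a click on a link and verify that the location changed.
  await page.getByRole('link', { name: 'About' }).click();
  await expect(page).toHaveURL('/about');
  // Verify that an expected heading is visible on the new page.
  await expect(page.getByRole('heading', { name: 'About us' })).toBeVisible();
});
```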
Playwright provides a test generator that takes a lot of the drudgery out of automating tests for long user flows. It launches your app in a Chromium instance alongside a second process that records all the actions you perform in the app. You navigate through a user flow in your app and the test generator translates your actions to code. The test generator will also add some basic assertions, like testing that the browser location updates to the expected url when you click a link. You can then copy the test code to your project and add other assertions manually.
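The generator is part of the Playwright CLI; pointing it at a locally running app looks something like this (the URL is just an example):

```sh
# Launches the app in a Chromium window alongside the recorder.
npx playwright codegen http://localhost:3000
```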
The test generator worked quite well for the flows that I automated, except that it failed to capture the browser back button. That resulted in tests that would fail unless modified to restore the missing navigation actions. Even if the code output by the test generator needs some work, it's still a win in my opinion. The effort necessary to fix the tests will likely be far less than the effort that would have been required to write them from scratch.
Testing with mock data
When you run the test generator, it launches your app, which is presumably backed by an API or some other data source. The data source might be a production environment (but hopefully not), or a remote test environment, or maybe a dev environment on your local computer. In any case, you're testing with live data. The problem with live data is that it's often subject to change, and when it changes your tests will probably fail. Consider the following scenario:
- Navigate to a page, `/products`, that renders a list of products.
- Click the first item to navigate to `/products/<id>`, which displays details about an individual product. In this example, `<id>` represents an `id` property that is assigned to the product's database record.
- Assert that the details page url contains the correct `id` property.
The Playwright test generator will write code that performs these actions based on rendered data, which will look more like this:
- Navigate to `/products`.
- Click the link that contains the text "Cuisinart Food Processor".
- Assert that the details page url is `/products/34`.
```js
test('Products list and detail navigation flow', async ({ page }) => {
  // Navigate to the products page:
  await page.goto('/products');
  await expect(page).toHaveURL('/products');
  // Click the "Cuisinart Food Processor" link:
  await page.getByRole('link', { name: 'Cuisinart Food Processor' }).click();
  await expect(page).toHaveURL('/products/34');
  // Navigate back to the products page:
  await page.goBack();
  await expect(page).toHaveURL('/products');
});
```
This will only work as long as the first product in the list is "Cuisinart Food Processor" with an id of 34. If the list is updated and another product is now first in the list, or if you test in another environment where the products have different ids, the test will fail. There are two solutions to this.
Mocking API responses
The easiest solution is to fulfill API requests with mock data. Playwright provides a way to do this simply and cleanly by intercepting requests to a particular route:
```js
test('Products list and detail navigation flow', async ({ page }) => {
  // Fulfill the products list API request with mock data.
  await page.route('/api/products', (route) => {
    return route.fulfill({
      status: 200,
      body: JSON.stringify([
        { id: 34, name: 'Cuisinart Food Processor' },
        { id: 75, name: 'Vitamix Blender' },
      ]),
    });
  });
  // Navigate to the products page:
  await page.goto('/products');
  await expect(page).toHaveURL('/products');
  // Click the "Cuisinart Food Processor" link:
  await page.getByRole('link', { name: 'Cuisinart Food Processor' }).click();
  await expect(page).toHaveURL('/products/34');
  // Navigate back to the products page:
  await page.goBack();
  await expect(page).toHaveURL('/products');
});
```
In this example, a request to `/api/products` will be fulfilled with the JSON-serialized test data. (It is assumed that we're mocking an API that sends JSON responses, but you can replace the test data with whatever is appropriate for your application.)
With reliable test data, you can be assured that the first item in the list of products will never change, unless you change it. As long as the app's behaviour remains the same, your tests will always pass. This is not to say that you should never test with live data. If you're running an integration test, you may need to verify a complex series of actions during which data must be written to, read from, and deleted from a database. However, there will be many situations in which you will only be concerned with testing one part of the system (e.g. the user interface) and you should be able to run your tests in isolation.
Testing based on structure, not content
Returning to the test case we looked at before, we can see that it relies on specific content to locate the first product on the page:
- Navigate to `/products`.
- Click the link that contains the text "Cuisinart Food Processor".
- Assert that the details page url is `/products/34`.
```js
test('Products list and detail navigation flow', async ({ page }) => {
  // Navigate to the products page:
  await page.goto('/products');
  await expect(page).toHaveURL('/products');
  // Click the "Cuisinart Food Processor" link:
  await page.getByRole('link', { name: 'Cuisinart Food Processor' }).click();
  await expect(page).toHaveURL('/products/34');
  // Navigate back to the list:
  await page.goBack();
  await expect(page).toHaveURL('/products');
});
```
Notice that this test doesn't care where the "Cuisinart Food Processor" link is rendered. We're expecting it to be in a list of products but the test doesn't verify that. It could be anywhere on the page. That may or may not be important to you, but it's worth pointing out.
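If the link's position does matter, one option is to scope the locator to the list container so the click only succeeds when the link is rendered inside it. A small sketch, reusing the `ul.products` selector from the structural example further down:

```js
// Only match the link if it is rendered inside the products list.
await page
  .locator('ul.products')
  .getByRole('link', { name: 'Cuisinart Food Processor' })
  .click();
```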
We could rewrite this sequence to depend on page structure, instead:
1. Navigate to `/products`.
2. Extract the detail page url from the first link in the products list.
3. Click that same link.
4. Assert that the details page url matches the url from step 2.
A test written this way would be content-agnostic and target elements precisely:
```js
test('Products list and detail navigation flow', async ({ page }) => {
  // Navigate to the products page:
  await page.goto('/products');
  await expect(page).toHaveURL('/products');
  // Get the first link in the products list and extract the detail page url:
  const link = page.locator('ul.products > li > a').nth(0);
  const url = await link.getAttribute('href');
  // Navigate to the product detail page:
  await link.click();
  await expect(page).toHaveURL(url);
  // Navigate back to the products list:
  await page.goBack();
  await expect(page).toHaveURL('/products');
});
```
The trade-off is that you need technical knowledge of your app to write tests based on structure. This approach may not be feasible, depending on who in your organization will be responsible for testing. It isn't a replacement for reliable test data, but it provides another way to make your tests more accurate and resilient.
Testing API requests
You may have cases in which it is important to verify that your app makes specific API requests. For example, you might want to test that a new API request is made when the user selects a filter, and that the request is configured with the filter they selected. Playwright allows you to wait for a request and obtain information about it. The following example verifies that a `GET` request is made for the `/products` list with certain pagination and sort params:
```js
test('A request is made for the first page of products in descending order of creation', async ({ page }) => {
  // Start waiting for the products request, then navigate to trigger it.
  const requestPromise = page.waitForRequest('/products**');
  await page.goto('/products');
  const request = await requestPromise;
  // Verify that the request was configured correctly:
  // Expect a GET request to /products?offset=0&limit=10&orderBy=createdAt&order=desc
  const url = new URL(request.url());
  expect(request.method()).toEqual('GET');
  expect(url.searchParams.get('offset')).toEqual('0');
  expect(url.searchParams.get('limit')).toEqual('10');
  expect(url.searchParams.get('orderBy')).toEqual('createdAt');
  expect(url.searchParams.get('order')).toEqual('desc');
});
```
If the request is made with any other method, or with any other values for the `offset`, `limit`, `orderBy` and `order` params, the test will fail.
Note: The wildcard (`**`) at the end of the url tells Playwright to match any request for `/products` regardless of the query params. Without it, the request would only be matched if it was made without any query params.
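You can inspect responses in a similar way. Here's a minimal sketch, assuming the same `/products` endpoint, that waits for the response and verifies its status:

```js
// Start waiting for the response, then navigate to trigger the request.
const responsePromise = page.waitForResponse('/products**');
await page.goto('/products');
const response = await responsePromise;
// Verify that the API responded successfully.
expect(response.status()).toEqual(200);
```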
Testing rendering accuracy
If you have a reliable and stable source of test data (see Mocking API responses) it's possible to test that your data was rendered according to requirements. The process is as follows.
For each item in your test data:
- Prepare the properties of the test data as you would expect them to have been rendered. For example, if you have a number formatter that renders a number (`25.00`) as a currency string (`'$25.00'`), apply it to the number. (If your user interface is localized, make sure your tests use the same locale as the app; a sketch of setting the locale follows the example below. If you're being thorough, you may want to run separate tests for each locale.)
- Locate the element that corresponds to each property in the DOM tree and extract the rendered value.
- Assert that the rendered values are equal to the formatted values.
```js
import { test, expect } from '@playwright/test';
import { productsTestData } from './test-data';
import { formatCurrency } from './utilities/currency';

test('Products list renders as expected', async ({ page }) => {
  await page.goto('/products');
  await expect(page).toHaveURL('/products');
  for (let i = 0; i < productsTestData.length; i++) {
    const product = productsTestData[i];
    // Format the expected values.
    const expectedLink = `/products/${product.id}`;
    const expectedName = product.name;
    const expectedPrice = formatCurrency(product.price);
    // Locate the list item that contains the product.
    const item = page.locator('ul.products > li').nth(i);
    // Get the rendered link href.
    const renderedLink = await item.locator('a').getAttribute('href');
    // Get the rendered product name and price.
    // Trim the values to ignore any whitespace introduced during rendering.
    const renderedName = (await item.locator('.product-name').textContent()).trim();
    const renderedPrice = (await item.locator('.product-price').textContent()).trim();
    // Assert that the rendered values equal the expected values.
    expect(renderedLink).toEqual(expectedLink);
    expect(renderedName).toEqual(expectedName);
    expect(renderedPrice).toEqual(expectedPrice);
  }
});
```
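On the localization point raised above: Playwright can set the browser locale for a test file, which helps keep locale-sensitive formatting (currency, dates) in sync with your assertions, assuming the app formats values based on the browser locale. A minimal sketch using en-CA:

```js
// Run every test in this file with a fixed browser locale so that
// locale-sensitive formatting matches the values asserted above.
test.use({ locale: 'en-CA' });
```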
How much should you test?
The complexity of your end-to-end tests will more or less reflect your application's complexity. The more information you render, the more you have to test. The example above only expects the link, product name and product price to be rendered as specific strings. It doesn't test that the layout and styling are correct, or even that the elements are visible. It's possible to write more comprehensive tests, but doing so requires more development time. Your tests will likely be invalidated more often, which will result in more failures. There are costs to consider and questions to ask:
- How mature is your product? Is your user interface subject to frequent design changes or is it stable?
- Can you afford the impact that more complex tests and more frequent failures will have on your dev team's velocity?
- At what point does the cumulative cost of test development and maintenance exceed the cost of human QA testing?
- At what point does the added time pressure cause developers to give up and remove failing tests instead of fixing them, rendering your investment pointless?
There are no right answers to these questions. The testing strategy you choose should depend on your organization's priorities and may evolve over time. An early-stage startup might prioritize high-level testing of user flows and limit rendering tests to the critical path. As the organization matures and their capacity improves, they might start to add tests for other parts of their product, or make existing tests more comprehensive, or both.
Remember that the goals of automated testing are to reduce human time and effort, and to improve consistency. The highest value automated tests are those that have to be run most often and require the most attention to detail – complex flows on your application's critical path. Start by identifying opportunities to reduce labour in those areas and gradually increase your test coverage from there.