Front End Testing Overview: main testing types, where to use each test, and good practices for effective Automated UI Testing
Alaa Mohammad
Posted on October 16, 2023
When building any application, there are many types of tests that can be applied, so it is important to know the basic purpose of each one and what it offers us when used, especially now that testing has become a crucial part of any application development life cycle.
In the following, I will define the most widespread types of tests in the frontend development process, compare them according to some important aspects, and finally review some good practices to follow when writing effective automated UI tests.
1- Unit testing
It tests small, individual building blocks of the application (such as an individual function) to find errors at an early stage of development, which makes fixing them easier. In unit testing, you don't test the entire flow of your application, only individual functions, so you can ensure the correct functionality of those individual units.
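As a minimal sketch, a unit test written with Jest might look like the following (the formatPrice helper is a hypothetical example):

```ts
// A hypothetical pure function under test.
import { formatPrice } from './formatPrice';

describe('formatPrice', () => {
  it('formats a number as a currency string', () => {
    expect(formatPrice(10)).toBe('$10.00');
  });

  it('rounds to two decimal places', () => {
    expect(formatPrice(9.999)).toBe('$10.00');
  });
});
```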
2- Component testing
This test covers real-life scenarios by testing not only a component's functionality but also its styles and appearance, which gives you a good indication of how the software will behave once it is released.
- Unit testing and component testing work best together: the developer can use unit testing during development, and once the component is completed, the tester can use component testing (a sketch follows below).
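For example, a component test with React Testing Library could look like this minimal sketch (SubmitButton is a hypothetical component):

```tsx
import { render, screen, fireEvent } from '@testing-library/react';
import { SubmitButton } from './SubmitButton'; // hypothetical component

it('calls onClick when the button is pressed', () => {
  const handleClick = jest.fn();
  render(<SubmitButton label="Pay now" onClick={handleClick} />);

  // Interact with the rendered output the way a user would.
  fireEvent.click(screen.getByRole('button', { name: 'Pay now' }));

  expect(handleClick).toHaveBeenCalledTimes(1);
});
```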
3- Integration testing
This test is used to ensure that several units will work well when they are combined and to identify bugs in the interaction between integrated units.
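As a sketch, an integration test could render a page that wires several units together, mocking only the network layer (SearchPage and the api module below are hypothetical):

```tsx
import { render, screen, fireEvent } from '@testing-library/react';
import { SearchPage } from './SearchPage'; // hypothetical page component
import * as api from './api';              // hypothetical API client module

jest.mock('./api');

it('shows results returned by the API after a search', async () => {
  (api.searchProducts as jest.Mock).mockResolvedValue([{ id: 1, name: 'Keyboard' }]);

  render(<SearchPage />);
  fireEvent.change(screen.getByRole('textbox'), { target: { value: 'key' } });
  fireEvent.click(screen.getByRole('button', { name: /search/i }));

  // findByText waits until the mocked API response is rendered.
  expect(await screen.findByText('Keyboard')).toBeTruthy();
});
```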
4- Smoke testing
It is a broad but shallow functional test performed after a new build of an application is delivered. It tests the build's basic functionality from end to end.
5- Regression testing
This test is used to make sure that code updates or bug fixes don't cause any unintended side effects or break any existing features (regression bugs).
- Visual Regression Testing
Like regression testing, it generally validates that the changes you make to your code do not have an unexpected impact on your application, but it focuses on the user interface rather than the behavior of the system. In other words, it ensures that code changes do not break the appearance of the application's user interface.
- Sanity Testing
It is considered a subset of regression testing. It ensures stability after bug fixes, new functionality, or any other code change, and it verifies whether the software is ready for the next level of testing.
6- E2E (End-to-End) testing
This test simulates user interaction by testing the entire application workflow to ensure that the main application features work correctly.
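As a sketch, an E2E test written with a tool like Cypress walks through a full user flow; the routes, selectors, and copy below are assumptions:

```ts
describe('checkout flow', () => {
  it('lets a user add a product to the cart and pay', () => {
    cy.visit('/products');
    cy.get('[data-testid="ProductList-btn-addToCart"]').first().click();

    cy.visit('/checkout');
    cy.get('[data-testid="CheckoutDashboard-PaymentForm-btn-payment"]').click();

    cy.contains('Thank you for your order').should('be.visible');
  });
});
```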
7- Cross browser testing
It is an integral part of E2E testing. It ensures that the application provides a user experience that works across different browsers on a variety of devices, platforms, and environments.
- Accessibility testing
It is considered a subset of cross-browser testing and a part of usability testing. It ensures that a company's website, applications, and digital content are accessible to individuals of all abilities, including people with disabilities (visual, auditory, cognitive, and motor). For example, they should be compatible with assistive technologies such as screen readers.
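Part of this can be automated. As a minimal sketch, the jest-axe library can flag common violations in a rendered component (LoginForm is a hypothetical component, and automated checks only catch a subset of accessibility issues):

```tsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { LoginForm } from './LoginForm'; // hypothetical component

expect.extend(toHaveNoViolations);

it('has no detectable accessibility violations', async () => {
  const { container } = render(<LoginForm />);
  expect(await axe(container)).toHaveNoViolations();
});
```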
8- System Testing
It is performed on an integrated system to evaluate that system's compliance with the corresponding requirements.
9- Acceptance testing
This test is performed in an environment similar to production to make sure that the software is functioning as expected.
Understanding the differences between all these tests is essential to knowing when to use each one. I will compare these types in terms of the following factors:
- At which stage of the development life cycle do we use each test?
- Which team could use each of these tests?
- System coverage when using each test.
1- Unit testing
- We use it to detect potential bugs in individual functions at an early stage of the development process, before the code is integrated and tested as a whole system.
- It is typically performed by developers.
- It covers the inner workings of a system, like manipulating data between the frontend and a database or mapping data to be used in an API call.
2- Component testing
- It is performed before Integration testing and after Unit testing.
- It is typically performed by developers, or sometimes by dedicated testers.
- This test is performed on the whole component after its development has been completed.
3- Integration testing
- It is performed after unit testing and before system or E2E testing.
- It is typically performed by the QA team and also by developers.
- This test involves integrating the different modules or units of an application and then verifying that the individual units communicate with each other as expected.
4- Smoke testing
- It is performed after the developers deliver a new build of an application and before that build is handed over to the QA team. It is performed on initial builds (in the initial phase of the application development life cycle, when the build is still unstable).
- It is typically performed by the QA team and sometimes also by developers.
- It covers the basic functionalities end to end.
5- Regression testing
- It is used when making any updates: after fixing bugs, adding new features, or perhaps importing new data. It is performed after the development team has completed their work (adding features, fixing bugs, etc.).
- It is typically performed by the QA team.
- It tests specific features or bug fixes after the development team has completed their work.
- Sanity testing (a subset of Regression Testing):
- It is performed only after the build has cleared the smoke test and been accepted by the QA team for further testing (it is performed on a stable build).
- It is typically performed by the QA team.
- It covers only the modules in which code changes have been made.
6- E2E testing
- This test is used to verify the whole system’s functionality.
- It is typically performed by the QA team.
- It tests all components and all integrations included in the application's workflow.
7- Cross browser testing
- This test is performed during development, so any new features are tested to ensure they are cross-browser compatible before being pushed to production.
- It is typically performed by the QA team.
- It covers the full application or a new feature.
- Accessibility testing (a subset of Cross browser testing):
- We evaluate accessibility early and throughout the development life cycle to catch accessibility problems early, when they are easier to address.
- It is typically performed by testers and users to identify any potential gaps between users with different levels of technical and web expertise.
- It is performed on the company’s website, applications, and digital content.
8- System Testing
- System Testing is performed after Integration testing and before Acceptance testing.
- It is typically performed by a testing team.
- It is performed on a complete integrated system.
9- Acceptance testing
- It is the final stage of the software development process and is only performed when the development team is confident that any issues or bugs have been resolved. So, it is performed after system testing and before making the system accessible to real end users.
- It is typically performed by the customer or end-users.
- It covers the whole system or a single feature in an environment similar to production.
Good practices to follow for writing effective Automated UI Testing:
1- Add a data attribute as the identifier:
Even though we can use IDs or classes in our selectors when writing automated UI tests, it is much better to use data-* attributes for your automated testing.
According to the MDN Web Docs, data attributes:
"allow us to store extra information on standard, semantic HTML elements without other hacks such as non-standard attributes, or extra properties on DOM."
Using data attributes is better for the following reasons:
- The main purpose of class attributes is to style elements, and the main purpose of IDs is to provide a unique reference on each page. These attributes can change very often; IDs and classes can also be dynamic, which could cause a test to break when new data is fetched. The best way is to separate the concerns of markup and testing, and this can be done by using a special attribute for testing purposes, which leads us to the next factor.
- data-* attributes are added only to facilitate testing; they do not affect element selection for styling in CSS or code logic in JavaScript. This improves the maintenance and readability of the code, because dev teams will not change these attributes in any future refactoring, as they are dedicated to testing purposes only (see the sketch below).
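As a sketch of the difference in a Cypress test (the selectors are hypothetical):

```ts
// Brittle: coupled to styling, breaks as soon as the CSS classes are renamed.
cy.get('.btn.btn-primary.checkout-submit').click();

// Stable: a dedicated testing hook, independent of styling and JS logic.
cy.get('[data-testid="CheckoutDashboard-PaymentForm-btn-payment"]').click();
```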
2- Have a naming convention for your data-* attributes, like the following:
data-testid = ComponentName-SubComponent[Optional]-elementName-elementPurpose[Optional]
ComponentName can be the same as the page name; for example, on the checkout page the main component could be CheckoutDashboard.
data-testid = CheckoutDashboard-PaymentForm-btn-payment
data-testid = CustomTextField-label-error (a shared component)
data-testid = AuthDashboard-btn-signin
data-testid = FilterPanel-div (a shared component)
By using a meaningful and straightforward naming convention, developers and testers can quickly locate and interact with elements for testing or debugging purposes.
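In the markup, the convention might look like this minimal sketch (a hypothetical PaymentForm component on the checkout page):

```tsx
import * as React from 'react';

// Hypothetical component illustrating the naming convention.
export function PaymentForm({ onPay }: { onPay: () => void }) {
  return (
    <form data-testid="CheckoutDashboard-PaymentForm">
      <label data-testid="CheckoutDashboard-PaymentForm-label-cardNumber">
        Card number
        <input data-testid="CheckoutDashboard-PaymentForm-input-cardNumber" />
      </label>
      <button
        type="button"
        data-testid="CheckoutDashboard-PaymentForm-btn-payment"
        onClick={onPay}
      >
        Pay
      </button>
    </form>
  );
}
```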
3- Keep tests small, readable, stateless, and focused (see the sketch after this list):
- Write small subtests in every suite.
- Use comments for any code that may be difficult for your teammates to understand.
- Each test shouldn't leave side effects after execution (it shouldn't change anything outside its scope).
- Avoid creating tests that test multiple functionalities at once; this makes it easier and more efficient to pinpoint and fix the failing part.
- Each test should be focused and test a single aspect of your application.
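A sketch of what this can look like in Cypress: each test sets up its own state and asserts one thing (the routes, selectors, and reset endpoint are assumptions):

```ts
describe('shopping cart', () => {
  beforeEach(() => {
    // Reset anything a previous test might have touched (hypothetical test-only endpoint).
    cy.request('POST', '/api/test/reset-cart');
    cy.visit('/products');
  });

  it('adds a single product to the cart', () => {
    cy.get('[data-testid="ProductList-btn-addToCart"]').first().click();
    cy.get('[data-testid="Header-span-cartCount"]').should('have.text', '1');
  });

  it('removes a product from the cart', () => {
    cy.get('[data-testid="ProductList-btn-addToCart"]').first().click();
    cy.visit('/cart');
    cy.get('[data-testid="Cart-btn-remove"]').click();
    cy.contains('Your cart is empty').should('be.visible');
  });
});
```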
4- Finally, writing good tests is not just about simulating what a real user scenario looks like to get passing (green) tests and cover the happy path. Your test should have as many subtests as it needs to cover edge cases, using manual or automated testing that simulates both the expected and unexpected scenarios of the end user: for example, completing only half of a multistep process, character limits, field validation, exceptions in function calls, and form submission. This helps detect issues or bugs that fall outside the typical user scenario flow.
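For instance, a couple of edge-case subtests for a hypothetical sign-up form might look like this sketch (the selectors, character limit, and error copy are assumptions):

```ts
describe('sign-up form validation', () => {
  beforeEach(() => cy.visit('/signup'));

  it('rejects an email without a domain', () => {
    cy.get('[data-testid="SignupForm-input-email"]').type('user@');
    cy.get('[data-testid="SignupForm-btn-submit"]').click();
    cy.get('[data-testid="SignupForm-label-error"]').should('contain', 'valid email');
  });

  it('enforces the 50-character limit on the username', () => {
    cy.get('[data-testid="SignupForm-input-username"]').type('a'.repeat(60));
    cy.get('[data-testid="SignupForm-input-username"]').invoke('val').should('have.length', 50);
  });
});
```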