Parallel Testing: The Essential Guide
Sourojit Das
Posted on April 15, 2022
With more and more software engineering teams gravitating to the CI/CD model, the pressure to reduce time to market while still delivering a reliable product keeps growing. In QA, the only way to minimize the impact of bugs is to test the code as early as possible and fail fast.
For QA teams, this means increasing test coverage through automation. But the sheer number of tests to be performed, combined with the limited time available to test features, means that classic sequential automated tests are often not enough. Parallel testing lets teams run automated tests against several configurations at the same time, scaling down time and budget constraints while increasing test coverage and safeguarding quality.
Parallel Testing: What it is and How it works
Parallel testing leverages automation by executing the same tests simultaneously across multiple environments, real device combinations, and browser configurations. The overarching goal of parallel testing is to reduce time and resource constraints. Unlike distributed testing, where the different test components running in a distributed manner interact with each other, the parallel test approach is free from any such interaction.
To better understand how parallel testing boosts standard automation testing capabilities, let us take a simple example: an automated functional test of a signup form. If this test must run against 45 different browser/OS configurations and each run takes an average of 2 minutes, the total run time comes to 90 minutes, or 1.5 hours, when the tests execute sequentially.
Now imagine running 3 parallel tests simultaneously: the total execution time drops to 30 minutes.
And with 6 parallels, it falls further to 15 minutes, a fraction of the original sequential run time.
Considering that this is one of the most basic automated functional tests, it is remarkable how much time parallelization can shave off even the most cumbersome test suites.
The number of parallels required to optimize test times depends on the following key factors:
- The current number of browsers and/or devices on which the scenario needs to be tested
- The number of parallel tests currently being run (keep this as 1 if only sequential tests are being run)
- The current build time
- The desired number of browsers and/or devices on which the scenario needs to be tested, and
- The desired build time
For more complex scenarios involving a shift in any of these parameters, it is best to use a Parallel Test Calculator to understand the number of parallel sessions required to achieve test coverage and build execution time goals.
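While such a calculator is the most convenient option, the underlying arithmetic is straightforward. The sketch below is an illustrative approximation, not BrowserStack's exact logic, and the class and variable names are ours:

// A minimal sketch of the arithmetic behind a parallel test calculator.
// The formula and names are illustrative, not the calculator's actual implementation.
public class ParallelCalculatorSketch {

    // Estimated parallels needed to hit a desired build time.
    static int requiredParallels(int currentParallels,
                                 double currentBuildTimeMins,
                                 double desiredBuildTimeMins) {
        // Total sequential work = current build time x current parallels.
        double totalWorkMins = currentBuildTimeMins * currentParallels;
        // Spread that work across enough parallels to fit the desired build time.
        return (int) Math.ceil(totalWorkMins / desiredBuildTimeMins);
    }

    public static void main(String[] args) {
        // The signup-form example above: 45 configurations x 2 minutes = 90 minutes sequentially.
        double totalWorkMins = 45 * 2;
        System.out.println(totalWorkMins / 3 + " mins with 3 parallels");   // 30.0
        System.out.println(totalWorkMins / 6 + " mins with 6 parallels");   // 15.0

        // Calculator-style question: a 90-minute sequential build (1 parallel)
        // with a target build time of 15 minutes needs 6 parallels.
        System.out.println(requiredParallels(1, 90, 15) + " parallels required");
    }
}

Running this reproduces the 30- and 15-minute figures from the signup-form example and shows that reaching a 15-minute build from a 90-minute sequential run needs 6 parallels.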
When to perform Parallel Testing
Now that we have some inkling of how parallel testing works and how it can optimize test execution time frames, let's look at certain scenarios where parallel test execution is ideal:
- When the software engineering team is uncertain how an application behaves across devices and configurations, especially in regression-like scenarios involving an updated version of an app.
- When legacy data is imported from an older system to a newer one, parallel tests paired with an automated regression test suite can help verify that everything has been migrated seamlessly.
- For cases of automated browser testing, when an app needs to be tested across different combinations of browsers, OS, and devices.
How to approach parallel test creation
A common way to perform parallel testing is to create a suite of projects that test browser compatibility, along with a master project that runs that suite. Once this is set up, it is time to define a set of entry and exit criteria.
For entry criteria, we must define the checkboxes that have to be ticked off before test execution starts, for example:
- Set up a suitable test environment, for instance, Selenium with TestNG
- Define the pre-conditions and test scenarios for the automated browser test process
- Migrate any new and legacy data required
The exit criteria can focus more on:
- Understanding the difference in terms of test suite execution impact
- Measuring the output of a new system as compared to a legacy system (if applicable)
Parallel Testing using TestNG and Selenium
TestNG is a Java-based testing framework that helps organize tests in a structured manner and improves the maintainability and readability of test scripts. TestNG's large feature set also makes it easy to configure parallel tests. The testng.xml configuration file lets testers set the parallel attribute on methods, tests, or classes by leveraging Java's support for multithreading, and it determines the number of threads to use for parallel test execution.
The parallel attribute can be set to:
- methods: to run test methods in separate threads
- tests: to run all methods belonging to the same <test> tag in the same thread
- classes: to run all methods belonging to the same class in a single thread
- instances: to run all methods in the same instance in the same thread
The thread-count attribute, in turn, defines the number of threads the tester wishes to create when executing the tests in parallel.
The code snippet below demonstrates parallel test execution using two methods: one opens the BrowserStack homepage on Google Chrome, while the other opens the BrowserStack sign-up page on Firefox.
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.Test;

public class ParallelTestWithMultiThread {

    @Test
    public void testOnChromeWithBrowserStackUrl() {
        // Each method creates its own driver so the two tests can run safely in parallel threads.
        System.setProperty("webdriver.chrome.driver", ".\\Driver\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        try {
            driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
            driver.get("https://www.browserstack.com/");
            driver.manage().window().maximize();
            System.out.println("Chrome test for the BrowserStack homepage, thread "
                    + Thread.currentThread().getId());
        } finally {
            driver.quit();
        }
    }

    @Test
    public void testOnFirefoxWithBrowserStackSignUp() {
        System.setProperty("webdriver.gecko.driver", ".\\Driver\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        try {
            driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
            driver.get("https://www.browserstack.com/users/sign_up");
            driver.manage().window().maximize();
            driver.findElement(By.id("user_full_name")).sendKeys("");
            driver.findElement(By.id("user_email_login")).sendKeys("");
            driver.findElement(By.id("user_password")).sendKeys("");
            System.out.println("Firefox test for the BrowserStack sign-up page, thread "
                    + Thread.currentThread().getId());
        } finally {
            driver.quit();
        }
    }
}
The TestNG XML file below helps to set up the parallel attribute as well as the thread count for the task.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Parallel Test Suite" parallel="methods" thread-count="2">
    <test name="Parallel Tests">
        <classes>
            <class name="ParallelTestWithMultiThread"/>
        </classes>
    </test> <!-- Test -->
</suite> <!-- Suite -->
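As an aside, newer TestNG versions also expose this configuration programmatically through the org.testng.xml API. The short sketch below mirrors the XML file above; the runner class name is ours and is not part of the original example:

import java.util.Collections;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ProgrammaticParallelRunner {
    public static void main(String[] args) {
        // Equivalent of <suite parallel="methods" thread-count="2">
        XmlSuite suite = new XmlSuite();
        suite.setName("Parallel Test Suite");
        suite.setParallel(XmlSuite.ParallelMode.METHODS);
        suite.setThreadCount(2);

        // Equivalent of the <test>/<classes> block in testng.xml
        XmlTest test = new XmlTest(suite);
        test.setName("Parallel Tests");
        test.setXmlClasses(Collections.singletonList(new XmlClass("ParallelTestWithMultiThread")));

        // Run the suite exactly as if it had been loaded from testng.xml
        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();
    }
}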
Once execution is complete, the TestNG report summarizes the results of both test methods.
The time savings from parallelization are visible in the run results: for this example, parallel test execution is almost 1.5 times faster than serialized execution.
Running MBUnit Tests in Parallel on BrowserStack Automate
BrowserStack Automate enables users to leverage the power of MBUnit to run tests across various browser, device, and OS configurations in parallel.
The steps for test execution are as follows:
- Clone the mbunit-browserstack repo from GitHub: git clone https://github.com/browserstack/mbunit-browserstack.git
- Open MBUnit-BrowserStack.sln in Visual Studio
- Configure the required credentials and the browser/device details for parallel testing in the App.config file within the mbunit-browserstack/MBUnit-BrowserStack/ directory
- Now the tests are ready to run in parallel on BrowserStack using the following simple steps:
- Build the solution in Visual Studio
- Execute the tests, with the fixtures running in parallel, from Test Explorer
- Other capabilities can be added under the environments option in the App.config file
Advantages and Limitations of Parallel Testing
Now that we have discussed when and how to run parallel tests, let us look at some of their advantages and limitations:
Advantages of Parallel Testing
- Speed: As we have demonstrated both theoretically and in practice, sequential automated test execution is time-consuming. Parallel tests can speed up the process significantly as the same test is being run against different configurations at the same time.
- Cost Efficiency: Creating, maintaining, and keeping test environments active takes time and effort. These overheads are far less of a headache in parallel testing: the required test environment can be leased when needed and updated on demand. Cloud-based test grids also allow test execution at high concurrency, reducing the overall cost per test.
- Optimization of CI/CD Processes: The effectiveness of a CI/CD pipeline is determined by its total run time. With feedback loops being of the utmost importance in CI/CD, the sooner the team has a result, the sooner it can apply the fix, refactor, and reiterate. Large test suites can be broken up into smaller independent jobs and run in parallel to obtain results quickly, thus improving the performance of the pipeline.
Limitations of Parallel Testing
- Parallel testing of different modules requires those modules to be independent; modules with dependencies on each other cannot be included in the parallel approach.
- Parallelization requires a detailed understanding of the product and its flows to yield good results. And even though parallelization helps with cross-browser compatibility testing, its coverage of multiple browsers remains limited unless it is accompanied by distributed testing, where a setup of multiple machines and browsers is provided.
Best Practices of Parallel Testing
None of the advantages of parallel testing will materialize, however, if we fail to follow best practices:
- Make autonomous tests: One test should not depend on the outcome of another, as parallel tests may not occur in any defined order.
- Test one feature at a time: The major advantage of parallel testing is modularization. A test that encompasses many features defeats the purpose of failing fast, which is key to CI/CD pipeline performance.
- Avoid the use of static objects: A static object shared across tests can be modified by another test running in parallel, leading to unpredictable results; keep per-test state isolated, for example with thread-local storage, as sketched after this list.
- Reset the test data: Prevent cross-contamination of data through modular creation, usage, and teardown of test data, so that each test leaves the system in the same state it found it.
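To illustrate the shared-state and test-data practices above, here is a minimal sketch of keeping one WebDriver per thread using ThreadLocal. The class and method names are ours, and it assumes a ChromeDriver binary is available on the system path:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ThreadSafeDriverSketch {

    // One WebDriver instance per thread, so parallel tests never share a browser session.
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    @BeforeMethod
    public void setUp() {
        DRIVER.set(new ChromeDriver());
    }

    @Test
    public void openHomePage() {
        DRIVER.get().get("https://www.browserstack.com/");
    }

    @Test
    public void openSignUpPage() {
        DRIVER.get().get("https://www.browserstack.com/users/sign_up");
    }

    @AfterMethod
    public void tearDown() {
        // Quit and clear this thread's driver so no state leaks into the next test.
        DRIVER.get().quit();
        DRIVER.remove();
    }
}

Because TestNG runs @BeforeMethod and @AfterMethod in the same thread as the test method, each parallel test gets, and cleans up, its own browser.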
In a nutshell
Because running tests in parallel requires access to multiple platforms and browsers, the cost of compatibility testing can rise, and a point may be reached where in-house access to every browser and version is simply not possible. BrowserStack provides access to numerous platforms and browsers, with their corresponding versions, on the cloud, so teams can run automated parallel tests across multiple browsers and versions.
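As a rough sketch of what that looks like with Selenium's RemoteWebDriver: the capability keys shown are illustrative and should be checked against BrowserStack's current capability documentation, and USERNAME/ACCESS_KEY are placeholders for your own credentials.

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class BrowserStackRemoteSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own BrowserStack credentials.
        String username = "USERNAME";
        String accessKey = "ACCESS_KEY";

        // Illustrative capabilities; consult BrowserStack's capability builder for exact keys.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", "Chrome");
        caps.setCapability("os", "Windows");
        caps.setCapability("os_version", "10");

        // Point the driver at BrowserStack's cloud Selenium hub instead of a local browser.
        WebDriver driver = new RemoteWebDriver(
                new URL("https://" + username + ":" + accessKey + "@hub-cloud.browserstack.com/wd/hub"),
                caps);
        try {
            driver.get("https://www.browserstack.com/");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

Running several such sessions concurrently, each with a different capability set, is what gives parallel cross-browser coverage without maintaining the devices in-house.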