A Beginner's Guide to Testing: Unit, Smoke, Acceptance


Vladislav Rybakov

Posted on March 18, 2023


Disclaimer

The thoughts and opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the companies the author works for or has worked for. These companies do not endorse or take responsibility for the content of this article. Any references to specific products, services, or companies are not endorsements or recommendations. The author is solely responsible for the accuracy and completeness of the information presented, and the companies assume no liability for any errors, omissions, or inaccuracies in this article.

Intro

At the moment, I work at Meta, where my team is responsible for implementing and supporting infrastructure for integration tests for backend services. Due to the specific nature of my work, I am motivated to write a series of articles on backend services testing that are as simple as possible, with examples and descriptions that can be useful to everyone, particularly beginners.

Prior to joining Meta, I worked at a dynamic reservoir simulation company and a large technology bank. Due to the importance of their products, having proper and reliable testing was mandatory. The practices involved not only automated testing but also comprehensive manual testing. However, many areas could have been improved with additional automated test coverage, enabling teams to detect bugs earlier and shorten the production cycle.

In this series, I will describe popular testing practices, starting from the most common and simple ones, in terms of their functionality and underlying concepts.

Unit testing

What is Unit Testing?

Unit testing is a software testing technique in which individual units or components of a software application are tested in isolation from the rest of the application. A unit can be a function, a method, a class, or even a module. The purpose of unit testing is to verify that each unit of the software performs as expected and meets its intended use.

Why are Unit Tests Important?

Unit testing is important for the following reasons:

  • Early detection of defects: Unit testing helps to identify defects and issues early in the development cycle, which reduces overall time and cost.
  • Increased confidence in software: Unit testing provides confidence in the software's functionality and performance before it is integrated with other components.
  • Reduction in regression issues: Unit testing helps to ensure that code changes and updates do not introduce new defects or issues.
  • Supports refactoring and maintenance: Unit testing supports refactoring and maintenance of the codebase by ensuring that changes do not break existing functionality.

Pros and Cons of Unit Testing:

Some advantages of unit testing include:

  • Improved code quality and reliability
  • Faster detection and resolution of defects
  • Better maintainability and scalability
  • Increased developer confidence and productivity

Some disadvantages of unit testing include:

  • Maintenance costs
    • Test maintenance can become a significant overhead as the codebase grows, requiring updates to reflect changes in the code.
    • Changes to the system architecture can render unit tests obsolete, requiring significant rework to update the tests.
    • Tests can become brittle and require constant maintenance, particularly if they are tightly coupled to the code they are testing.
  • Difficulty in testing complex systems
    • Unit tests may not cover all possible scenarios that can arise in a complex system.
  • Over-reliance on unit tests at the expense of integration and system-level testing
    • False sense of security that the code is bug-free.
    • Limited scope, as unit tests only test code in isolation and not the interactions between different components.
    • Expensive rework if issues are only discovered at the integration or system-level.
    • Integration and system-level testing can catch issues that unit tests may miss, such as performance problems, security issues, or compatibility issues.

Useful Techniques for Proper Testing:

  • Dependency Injection: This technique involves passing dependencies to the code being tested through its constructor, method parameters, or properties. This allows the dependencies to be easily substituted with mock objects during testing (see the sketch after this list).
  • Test Data Builders: This technique involves creating test data objects with default values that can be overridden as needed for specific tests. This makes it easy to create test cases with different data scenarios.
  • Code Coverage Analysis: This technique involves measuring how much of the code is executed during testing. It helps to identify areas of the code that are not being tested and can help to improve the overall quality of the code.
  • Property-based Testing: This technique involves generating a large number of test cases based on a set of properties that the code is expected to satisfy. This helps to catch edge cases and corner cases that may not be covered by a smaller set of manually created test cases.
  • Mutation Testing: This technique involves making small changes to the code being tested to create new versions, and running the test suite against each version. This helps to identify weaknesses in the test suite by measuring how many of the mutated versions are still passing the tests.
  • Mock Objects: This technique involves creating fake objects that mimic the behavior of real objects in order to test the interaction between the code being tested and its dependencies. Mock objects can be used to simulate behavior of external services, databases, or complex dependencies.
  • OmniMock: This technique involves using a tool that can automatically generate mock objects for all the dependencies of a given code module. This can help to reduce the amount of boilerplate code needed to write test cases, and make it easier to test complex code with many dependencies.
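
To make a couple of these techniques concrete, here is a minimal sketch that combines dependency injection with a mock object using Python's built-in unittest.mock module. The OrderService class and the payment gateway are hypothetical names used only for illustration.

import unittest
from unittest import mock

class OrderService:
    # The payment gateway is injected through the constructor,
    # so a test can substitute it with a mock.
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def checkout(self, amount):
        return self.payment_gateway.charge(amount)

class TestOrderService(unittest.TestCase):
    def test_checkout_charges_the_gateway(self):
        # Replace the real gateway with a mock so the test stays isolated and fast.
        gateway = mock.Mock()
        gateway.charge.return_value = "ok"

        service = OrderService(payment_gateway=gateway)

        self.assertEqual(service.checkout(42), "ok")
        gateway.charge.assert_called_once_with(42)

Because the dependency is passed in rather than created inside the class, the test does not need a real payment provider, which is exactly the kind of isolation unit tests aim for.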

How to Automate Unit Testing:

Automation of unit tests is essential at every stage of the development process, but it becomes increasingly critical as the codebase grows in size and complexity. Automated unit tests can help catch bugs early in the development process, reducing the cost and time required for debugging and fixing issues. Additionally, automation enables developers to run tests quickly and efficiently, allowing for faster feedback on code changes.

Here are some best practices for automation of unit tests:

  • Keep the tests lightweight. Unit tests should be quick and easy to run. Keep the tests focused on a small piece of functionality and avoid testing multiple components or systems at once. By keeping the tests lightweight, developers can run them frequently, catch issues early, and ensure that the tests remain effective.
  • Use mocking and stubbing to isolate the unit being tested from its dependencies. This ensures that the tests remain lightweight and quick to run (a short patching sketch follows this list).
  • Integrate tests into the development process so they run automatically every time the code changes. This ensures that developers receive immediate feedback on the code changes they have made, and any issues are caught early in the process.
  • Use a continuous integration tool such as Jenkins or Travis CI to automate the testing process. Such a tool can run the tests automatically and provide immediate feedback on any issues. By automating the testing process, developers can focus on writing code rather than manually running tests.
  • Use version control software such as Git to track changes to the codebase. This makes it easy to see who made changes, when they were made, and what changes were made. Developers should ensure that tests are run before any changes are merged, preventing any broken code from being merged into the codebase.
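
As a small illustration of the mocking and stubbing point above, the following sketch patches a module-level helper so the test never performs real I/O. The fetch_exchange_rate and price_in_eur functions are hypothetical and stand in for code that would normally call an external service.

import unittest
from unittest import mock

def fetch_exchange_rate(base, quote):
    # Stand-in for a real network call made by production code.
    raise RuntimeError("unit tests should never reach the network")

def price_in_eur(price_usd):
    return price_usd * fetch_exchange_rate("USD", "EUR")

class TestPriceInEur(unittest.TestCase):
    @mock.patch(__name__ + ".fetch_exchange_rate", return_value=0.5)
    def test_uses_stubbed_rate(self, _mock_rate):
        # The helper is stubbed for the duration of the test,
        # keeping it fast and deterministic.
        self.assertEqual(price_in_eur(10), 5.0)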

Example

In the example below, we can see that even a simple function like divide can be tested in many ways to ensure that it works properly and that its expected behavior is not affected by changes made to its code. The list of tests may not be comprehensive, but it provides a nice demonstration of the importance of proper testing.

import unittest

def divide(a, b):
    return a / b

class TestDivide(unittest.TestCase):
    def test_divide_by_positive_integers(self):
        self.assertEqual(divide(10, 2), 5)
        self.assertEqual(divide(4, 2), 2)

    def test_divide_by_zero(self):
        self.assertRaises(ZeroDivisionError, divide, 10, 0)

    def test_divide_by_float(self):
        self.assertAlmostEqual(divide(1, 3), 0.33333333333, places=10)

    def test_zero_dividend(self):
        self.assertEqual(divide(0, 1), 0)

    def test_self_division(self):
        self.assertEqual(divide(1, 1), 1)

    def test_negative_division(self):
        self.assertEqual(divide(-10, -2), 5)
        self.assertEqual(divide(-10, 2), -5)
        self.assertEqual(divide(10, -2), -5)

    def test_divide_by_nonnumeric_input(self):
        self.assertRaises(TypeError, divide, "10", 2)
        self.assertRaises(TypeError, divide, 10, "2")

    def test_divide_overflow(self):
        self.assertRaises(OverflowError, divide, 10**400, 1)

    def test_large_integer_division(self):
        self.assertEqual(divide(999999999999999, 3), 333333333333333)
        self.assertEqual(divide(-999999999999999, 3), -333333333333333)

Test case descriptions
  • self.assertEqual(divide(10, 2), 5) and self.assertEqual(divide(4, 2), 2) - These tests check whether the divide method returns the correct value when we divide 10 by 2 or 4 by 2, which should be 5 or 2 respectively.
  • self.assertRaises(ZeroDivisionError, divide, 10, 0) - This test checks whether the divide method raises a ZeroDivisionError when we attempt to divide 10 by 0, which is an invalid operation.
  • self.assertAlmostEqual(divide(1, 3), 0.33333333333, places=10) - This test checks whether the divide method returns a value that is close to the expected value, with a tolerance of 10 decimal places. This is useful for cases where the expected value is a decimal or a fraction.
  • self.assertEqual(divide(0, 1), 0) - This test checks whether the divide method returns 0 when we divide 0 by a non-zero number.
  • self.assertEqual(divide(1, 1), 1) - This test checks whether the divide method returns 1 when we divide a number by itself.
  • self.assertEqual(divide(-10, -2), 5) - This test checks whether the divide method works correctly with negative numbers.
  • self.assertEqual(divide(-10, 2), -5) and self.assertEqual(divide(10, -2), -5) - These tests check whether the divide method works correctly with mixed signs.
  • self.assertRaises(TypeError, divide, "10", 2) and self.assertRaises(TypeError, divide, 10, "2") - These tests check whether the divide method raises a TypeError when we pass in non-numeric arguments.
  • self.assertRaises(OverflowError, divide, 10**400, 1) - This test case checks whether the divide method raises an OverflowError when the result of the division is too large to represent as a float, for example when dividing a very large integer by 1.
  • self.assertEqual(divide(999999999999999, 3), 333333333333333) - This test case checks whether the divide function correctly handles large positive integers.
  • self.assertEqual(divide(-999999999999999, 3), -333333333333333) - This test case checks whether the divide function correctly handles large negative integers.

By including these additional test cases, we can be more confident that the divide method works correctly and handles different types of input values.
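
The hand-written cases above could also be complemented with property-based testing, one of the techniques mentioned earlier. Below is a minimal sketch using the hypothesis library (assuming it is installed and that the divide function above is in scope in the test module); it asserts a simple round-trip property over a bounded range of inputs.

import math

from hypothesis import given, strategies as st

@given(
    st.integers(min_value=-10**6, max_value=10**6),
    st.integers(min_value=-10**6, max_value=10**6).filter(lambda b: b != 0),
)
def test_divide_round_trip(a, b):
    # Property: multiplying the quotient back by the divisor recovers the dividend.
    assert math.isclose(divide(a, b) * b, a, abs_tol=1e-9)

A single property like this generates many concrete cases on every run, which can surface edge cases that a hand-written list might miss.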


Conclusion

In conclusion, unit testing is an important software testing technique that helps to ensure the quality and reliability of software applications. Unit tests can be performed in many programming languages, and unit testing frameworks and libraries make it easier to write and run unit tests. Automated testing tools and frameworks can help to streamline the process of writing and running unit tests, which can save time and improve overall efficiency.

Smoke testing

What is Smoke Testing?

Smoke testing, also known as "Build Verification Testing," is a type of software testing that verifies the basic functionality of an application. The purpose of smoke testing is to ensure that critical features of the software are working correctly and to detect any major defects before performing more in-depth testing.

The term "smoke testing" comes from the hardware testing, where electronic devices would be turned on for the first time, and if they didn't catch on fire, they would "smoke test" the device. Similarly, in software testing, smoke testing refers to the quick test to see if the system catches fire (crashes) before further testing.

Smoke tests are typically executed after a new build of the software is completed, and they are designed to verify that the software can perform its basic functions correctly. These tests may be performed manually or automated, and they cover the most critical features of the software.

Examples of Smoke Testing:

Here are some examples of smoke testing scenarios:

  • Verify that the application can launch without crashing
  • Check that the login functionality works correctly
  • Confirm that the database connection is working
  • Ensure that data can be saved and retrieved from the database
  • Verify that critical UI elements are visible and functional

Why Smoke Tests are Important:

Smoke testing is essential because it catches critical defects early in the development process. By identifying defects early, developers can fix them before they become more significant issues that require more time and resources to resolve.

Smoke testing can also help improve the quality of the software by identifying issues that might not be caught by other testing methods. This can ultimately save time and money, as well as improve the user experience of the application.

Smoke tests focus on verifying the basic functionality of a system, usually at a high level. They are often used to quickly identify significant issues that may prevent the application from working correctly and are run before more in-depth testing to catch any major problems early. While smoke tests are useful for detecting major issues in the application's overall functionality, they do not provide the same level of detail as unit tests when it comes to verifying the correctness of individual code units. Conversely, unit tests may not catch issues that arise when multiple code units are combined, which is where smoke tests come in handy.

Pros and Cons of Smoke Testing:

Some advantages of smoke testing include:

  • Identifies critical defects early in the development process
  • Helps improve the quality of the software
  • Saves time and money by identifying issues early
  • Can improve the user experience of the application

Some disadvantages of smoke testing include:

  • Only covers basic functionality
  • Can give a false sense of security if not followed up by more thorough testing
  • Requires time and resources to set up and execute
  • May miss some defects that are not apparent during smoke testing

How to Automate Smoke Testing:

Automating smoke testing can help reduce the time and effort required to execute smoke tests. Here are some steps to follow to automate smoke testing:

  • Identify the critical features of the software to test
  • Select an automation tool or framework
  • Write test scripts to automate the tests
  • Integrate testing into the development process
    • Schedule the tests to run automatically after each build
    • Use a continuous integration tool such as Jenkins or Travis CI
    • Use version control software such as Git to track changes to the codebase.

Example

Suppose we have a RESTful API service built using Python and the Flask framework. The service has several endpoints, including a GET endpoint that retrieves a list of products from a database.

We can create a smoke test that verifies the following:

  • The service can start without errors.
  • The GET endpoint returns a response with a status code of 200.
  • The response from the GET endpoint contains a list of products.

Here's an example of how to implement a smoke test for this Python service using the PyTest library:

  • Install PyTest using pip:
$ pip install pytest
  • Create a test file named test_smoke.py in the root directory of your project.
  • Import the necessary modules:
import requests
import json
  • Define a fixture that starts the service:
import threading

import pytest
from werkzeug.serving import make_server

from myapp import create_app

@pytest.fixture(scope="session")
def app():
    app = create_app()
    # Serve the app on localhost:5000 in a background thread,
    # so the smoke test can reach it over real HTTP.
    server = make_server("localhost", 5000, app)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    yield app
    server.shutdown()
    thread.join()

Here, we are creating a session-scoped fixture named app that builds the Flask app by calling the create_app() function defined in the myapp module and serves it on localhost:5000 in a background thread, so the test can reach it over HTTP.

  • Define a test function that sends a GET request to the service and verifies the response:
def test_get_products(app):
    response = requests.get('http://localhost:5000/products')
    assert response.status_code == 200
    data = json.loads(response.text)
    assert isinstance(data, list)
    assert len(data) > 0

Here, we are sending a GET request to the /products endpoint and verifying that the response has a status code of 200. We are also checking that the response contains a list of products.

Run the test using PyTest:

$ pytest test_smoke.py

This will execute the test and output the results. If the test passes, you should see a message like this:

============================= test session starts ==============================
collected 1 item

test_smoke.py .                                                         [100%]

============================== 1 passed in 0.12s ==============================

If the test fails, PyTest will provide detailed information about the failure.

By running this test after each build, we can ensure that the service is functioning correctly before moving on to more in-depth testing.

Conclusion

Smoke tests focus on verifying the basic functionality of a system, usually at a high level. They are run before more in-depth testing to quickly surface significant issues that may prevent the application from working correctly. While smoke tests are not as detailed as other types of testing, they serve an essential purpose in the development process.

It's important to note that automated smoke testing should not be the only testing method used. It's still essential to perform more in-depth testing to identify all possible defects.

Acceptance testing

What is acceptance testing?

Acceptance testing is a type of software testing that is performed to verify whether a software application meets the specified requirements and is ready to be deployed to production. It is usually performed after unit testing and integration testing and before the software is released to end-users.

In contrast to smoke tests, acceptance tests are designed to test whether the software meets the requirements and specifications of the stakeholders or end-users. Acceptance tests are typically run after the development phase is complete and before the software is released to the end-users. The goal of acceptance testing is to ensure that the software is suitable for release and that it meets the expectations of the stakeholders. Acceptance testing is usually done manually, and it may involve creating test cases based on user stories, user workflows, or other specifications.

Examples of acceptance tests include:

  • User acceptance testing (UAT) - where end-users test the software to verify that it meets their needs and requirements.
  • Business acceptance testing (BAT) - where stakeholders from the business side of the organization test the software to ensure that it aligns with business objectives and processes.
  • Operational acceptance testing (OAT) - where the software is tested in a production-like environment to ensure that it can be deployed and operated smoothly.

Why acceptance tests are important:

  • Ensure that the software meets requirements: Acceptance testing is crucial in ensuring that the software meets the requirements specified by stakeholders and end-users.
  • Prevent defects from reaching production: Acceptance testing helps to identify defects early in the development process, which can save time and money by preventing costly defects from reaching production.
  • Improve software quality: Acceptance testing helps to improve the quality of software by identifying defects and ensuring that the software functions as expected.
  • Increase stakeholder confidence: Stakeholders, including end-users, business owners, and project managers, gain confidence in the software's functionality when acceptance tests are successfully passed.

Pros and cons of acceptance testing:

Some advantages of acceptance testing include:

  • Increases software quality.
  • Prevents defects from reaching production.
  • Improves stakeholder confidence.
  • Provides a clear indication of when the software is ready for release.

Some disadvantages of acceptance testing include:

  • Can be time-consuming and expensive.
  • Requires significant planning and coordination with stakeholders.
  • Testing may not be exhaustive and may not uncover all defects.
  • The results may be subjective and depend on the interpretation of stakeholders.

How to automate acceptance testing:

  • Define test scenarios that cover the software's key functionality and requirements.
  • Write test scripts that automate the test scenarios and verify the software's functionality.
  • Run the automated tests to verify the software's functionality and identify defects.
  • Repeat as needed. Continuously update and refine the test scenarios and test scripts to ensure that the software remains functional and meets stakeholder requirements.
  • Integrate testing into the development process
    • Schedule the tests to run automatically after each build
    • Use a continuous integration tool such as Jenkins or Travis CI
    • Use version control software such as Git to track changes to the codebase.

Example

Suppose we have a trading platform that allows users to buy and sell currency pairs. We want to test the functionality of placing a market order to buy EUR using USD, at a given exchange rate quoted in USD per EUR.

def place_market_order(
    usd_balance: float, eur_balance: float, exchange_rate: float, order_quantity: float
) -> tuple[float, float]:
    # exchange_rate is quoted in USD per EUR; order_quantity is the amount of EUR to buy.
    cost = order_quantity * exchange_rate
    updated_usd_balance = usd_balance - cost
    updated_eur_balance = eur_balance + order_quantity
    return updated_usd_balance, updated_eur_balance

Here's how we can write an acceptance test for this scenario using PyTest:

import pytest

def test_market_order_usd_eur():
    # Arrange
    usd_balance = 1000.00
    eur_balance = 500.00
    exchange_rate = 1.20  # USD per EUR

    # Act
    updated_usd_balance, updated_eur_balance = place_market_order(
        usd_balance, eur_balance, exchange_rate, order_quantity=250.00
    )

    # Assert
    assert updated_usd_balance == pytest.approx(700.00)
    assert updated_eur_balance == pytest.approx(750.00)

Test description
In this example, we first define a test function called test_market_order_usd_eur(). In the Arrange section, we set up the initial balance of USD and EUR for the user, as well as the current exchange rate between USD and EUR.

In the Act section, we simulate placing a market order to buy 250.00 EUR using USD. The cost of the order in USD is the quantity multiplied by the exchange rate (250.00 × 1.20 = 300.00 USD), so the cost is subtracted from the user's USD balance and the purchased quantity is added to the user's EUR balance.

Finally, in the Assert section, we verify that the user's balances have been updated correctly according to the exchange rate and the order quantity. Specifically, we check that the USD balance has been reduced by the correct amount, and the EUR balance has been increased by the correct amount.

We can then run this test using PyTest, and it will automatically execute the code to place the market order and verify that the user's balances have been updated correctly. If the test passes, we can be confident that the trading platform is working correctly and meets our acceptance criteria for placing a market order to buy EUR using USD.


Differences Between Acceptance Testing and Unit Testing

Attentive readers may notice that the example provided above looks similar to the practice of unit testing. However, acceptance testing differs from unit testing in several ways.

  • Scope: acceptance testing focuses on testing the system as a whole, while unit testing focuses on testing individual units or components of the system in isolation.
  • Purpose: the purpose of acceptance testing is to ensure that the system meets the requirements and expectations of stakeholders, while the purpose of unit testing is to catch bugs and ensure that individual components of the system are working correctly.
  • Collaboration: acceptance testing typically involves collaboration with stakeholders and end-users to write and execute tests that simulate real-world usage scenarios, while unit testing is typically performed by developers in isolation from stakeholders and end-users.
  • Level of automation: while acceptance testing can be manual or automated, it is often automated to ensure repeatability and consistency. Unit testing, on the other hand, is typically automated to ensure efficiency and catch regressions quickly.

In summary, acceptance testing and unit testing serve different purposes and are executed at different levels of the system. Acceptance testing focuses on ensuring that the system meets the requirements and expectations of stakeholders, while unit testing focuses on catching bugs and ensuring that individual components of the system are working correctly.

Conclusion

In conclusion, acceptance testing is a critical component of software development that ensures the software meets stakeholder requirements and functions as intended. By automating acceptance testing, teams can save time and reduce costs while improving software quality and stakeholder confidence.

Popular Libraries for Testing:

Testing frameworks and libraries make it easier to write and run tests. Here are some popular testing libraries and frameworks for several programming languages:

  • C/C++: CppUTest, CppUnit, Google Test.
  • Python: PyTest, Behave, Robot Framework.
  • Java: JUnit, TestNG, Cucumber.
  • Golang: GoConvey, GoTest, Ginkgo.