7 pytest Features and Plugins That Will Save You Tons of Time

miguendes

Miguel Brito

Posted on October 10, 2020


In this tutorial, we'll learn the best pytest features and plugins to speed up your development process. They're very simple and you can start using them right away.

Table of Contents

  1. How to Stop a Test Session on the First Failure
  2. How to Re-Run Only the Last Failed Tests
  3. How to Re-Run the Whole Test Session Starting With Last Failed Tests First
  4. How to Display the Local Variables of a Failed Test
  5. How to Run Only a Subset of Tests
  6. How to Run Tests in Parallel
  7. How to Re-Run Flaky Tests and Eliminate Intermittent Failures
  8. Conclusion

How to Stop a Test Session on the First Failure

Running the full test suite of a big project may take a long time. Whether you're executing it locally or on a CI server, it's always frustrating to see a test fail after waiting patiently. On certain occasions, you might wish to abort the entire session after the first failure so you can fix the broken test immediately. Fortunately, pytest comes with a very convenient CLI option for exactly that: -x, or --exitfirst.

$ pytest -x tests/

How to Re-Run Only the Last Failed Tests

When developing locally, you might prefer to run all the tests before pushing to your repo. If you’re working on a small project with just a few tests, it’s OK to re-run all of them. However, if the full test suite takes minutes to run, you’ll probably want to execute only the ones that failed. pytest allows that via the --lf or --last-failed option. This way you can save precious time and iterate much quicker!

$ pytest --lf tests/

How to Re-Run the Whole Test Session Starting With Last Failed Tests First

Analogous to the preceding option, it might be helpful to re-run the whole suite but start with the tests that failed last time. To accomplish that, use the --ff or --failed-first flag.

$ pytest --ff tests/

How to Display the Local Variables of a Failed Test

We’ve learned how essential it is to iterate faster and how it can save you precious time. In the same fashion, it’s crucial that we can pick up key hints to help us debug failed tests. By using the --showlocals flag, or simply -l, we can see the value of all local variables in tracebacks.

$ pytest tests/test_variables.py -l               
================ test session starts ================
...                                                                                                                                                
tests/test_variables.py FF                              [100%]

================ FAILURES ================
________________ test_local_variables[name] ________________

key = 'name'

    @pytest.mark.parametrize("key", ["name", "age"])
    def test_local_variables(key):
        result = person_info()
>       assert key in result
E       AssertionError: assert 'name' in {'height': 180}

key        = 'name'
result     = {'height': 180}

tests/test_variables.py:11: AssertionError
________________ test_local_variables[age] ________________

key = 'age'

    @pytest.mark.parametrize("key", ["name", "age"])
    def test_local_variables(key):
        result = person_info()
>       assert key in result
E       AssertionError: assert 'age' in {'height': 180}

key        = 'age'
result     = {'height': 180}

tests/test_variables.py:11: AssertionError
================ short test summary info ================
FAILED tests/test_variables.py::test_local_variables[name] - AssertionError: assert 'name' in {'height': 180}
FAILED tests/test_variables.py::test_local_variables[age] - AssertionError: assert 'age' in {'height': 180}
================ 2 failed in 0.05s ================

How to Run Only a Subset of Tests

Sometimes you need to run just a subset of tests. One way of doing that is running all the test cases of an individual file. For example, you could do pytest tests/test_variables.py. Although this is better than running everything, we can still improve it. By using the -k option, you can specify keyword expressions that pytest will use to select the tests to be executed.

# tests/test_variables.py
def test_asdict():
    ...

def test_astuple():
    ...

def test_aslist():
    ...

Say you need to run only the first two tests. You can pass keywords joined by or:

$ pytest -k "asdict or astuple" tests/test_variables.py

Output:

$ pytest -k "asdict or astuple" tests/test_variables.py
==================================== test session starts ====================================
...                                          

tests/test_variables.py ..                                                            [100%]

============================== 2 passed, 1 deselected in 0.02s ==============================

How to Run Tests in Parallel

The more tests a project has, the longer it takes to run all of them. This sounds like an obvious statement, but it’s commonly overlooked. Running an extensive test suite one test after the other is an incredible waste of time. The best way to speed up the execution is to parallelize it and take advantage of multiple CPUs.

Sadly, pytest doesn’t support parallel execution out of the box, so we must fall back on plugins. The best pytest plugin for that is pytest-xdist.

To send your tests to multiple CPUs, use the -n or --numprocesses option.

$ pytest -n NUMCPUS

If you don’t know how many CPUs you have available, you can tell pytest-xdist to run the tests on all available CPUs with the auto value.

$ pytest -n auto
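If you find yourself typing -n auto on every run, you can make it the default through pytest's configuration. A minimal sketch, assuming pytest-xdist is installed and you use a pytest.ini file (the same addopts line works in setup.cfg or pyproject.toml with the appropriate section header):

```ini
# pytest.ini
[pytest]
addopts = -n auto
```

With this in place, a plain `pytest` invocation already runs in parallel.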

How to Re-Run Flaky Tests and Eliminate Intermittent Failures

One of the most disheartening situations is to see all tests passing locally only to fail on the CI server. Several reasons can cause failures like these, but mostly they’re a result of a “flaky” test. A “flaky” test is one that fails intermittently, in a non-deterministic manner. Usually, re-running them is enough to make them pass. The problem is, if you have a long running test suite, you need to re-trigger the CI step and wait several minutes. This is a great time sink and fortunately can be avoided.

So, to improve that, we can automatically re-run the “flaky” tests. By doing so, we increase the chance of making them pass and avoid failing the whole CI step.

The best pytest plugin for that is pytest-rerunfailures. This plugin re-runs failed tests as many times as we want, which helps eliminate intermittent failures.

The simplest way to use it is to pass the --reruns option with the maximum number of times you’d like the tests to run.

$ pytest --reruns 5

If you know ahead of time that an individual test is flaky, you can mark it to be re-run.

import pytest

@pytest.mark.flaky(reruns=5)
def test_flaky():
    assert get_result() is True
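pytest-rerunfailures also accepts a reruns_delay argument to wait a number of seconds between attempts, which helps when the flakiness comes from a slow external resource recovering. A sketch, where get_result is a hypothetical stand-in for whatever intermittently failing call your test makes:

```python
import pytest

def get_result():
    # hypothetical stand-in for a flaky operation
    # (e.g. a network call that occasionally times out)
    return True

@pytest.mark.flaky(reruns=5, reruns_delay=2)  # wait 2s between attempts
def test_flaky_with_delay():
    assert get_result() is True
```

The same delay can be applied globally on the command line with --reruns-delay.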

Conclusion

A large test suite can bring a lot of assurance to a project, but it also comes with a cost. Long-running test sessions can eat up a lot of development time and make iterations slower. By leveraging pytest features and its plugin ecosystem, it’s possible to speed up the development process dramatically. In this tutorial, we looked at 7 tips we can adopt to improve our lives and waste less time executing tests.

If you liked this post, consider sharing it with your friends! Also, feel free to follow me https://miguendes.me.

7 pytest Features and Plugins That Will Save You Tons of Time first appeared on miguendes's blog.
