Jesse Phillips
Posted on June 11, 2019
I have started wondering what some general indicators are that a testing effort should be reevaluated because the testing is likely doing harm. Flaky tests would be an easy go-to, but I thought there had to be more to what I've seen. This is written with automation in mind but may be applicable to manual testing.
As I tried explaining some testing challenges to a coworker, I realized the key indicator of harmful tests. At a high enough level, a test consists of an input and an expectation. This gets complicated when the environment is part of the input, or when multiple expectations exist for a single input.
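To make that concrete, here is a minimal sketch in Python. The `price_order` function and its values are hypothetical, just standing in for a system under test:

```python
def price_order(order):
    # Hypothetical system under test: applies a flat 10% coupon.
    discount = 0.10 if order["coupon"] == "SAVE10" else 0.0
    return order["subtotal"] * (1 - discount)


def test_discount_applied():
    # Input: the designated data the test feeds into the system.
    order = {"subtotal": 100, "coupon": "SAVE10"}
    # Expectation: the result that should hold across releases.
    assert price_order(order) == 90
```

Everything else in a test framework is scaffolding around those two pieces, which is why trouble with either one is worth noticing.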
If you're unable to determine what a test's inputs or expectations are, that is likely a good place to improve; the ambiguity may be causing some flakiness.
The primary concern is when your input or expected results are continuously in flux. It does not really matter whether the change is in the environment or in a JSON file in the test framework. To be clear about what I mean by designated inputs and results, let me give some examples.
Sometimes you will have failures because your validation relies on some form of content, such as the text of a button or link. This can be fine for a lot of tests, but if the text changes from one release to another, then it is time to stop those changes or apply a different validation. (Content may need testing, but there needs to be clarity about ownership and about what is being tested.)
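As a sketch of what "a different validation" could look like, here the page, its element id, and its wording are all hypothetical stand-ins for a rendered UI; the stable variant anchors on an identifier instead of the text:

```python
# Hypothetical snapshot of a rendered page, keyed by element id.
page = {
    "submit-button": {"id": "submit-button", "text": "Place your order"},
}


def test_submit_button_brittle():
    # Brittle: fails every time the wording is revised.
    assert page["submit-button"]["text"] == "Place your order"


def test_submit_button_stable():
    # Stable: validates the element by an identifier that rarely changes,
    # leaving the wording to whoever owns the content.
    assert "submit-button" in page
```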
I often mock out and test with APIs. Sometimes these change, say from XML to JSON. I use the native API format to drive my tests, which would mean my input changes from XML to JSON. However, I don't concern myself with this because it isn't a constant change; I can mostly rely on the testing to report back without my intervention. If the structure really is in constant flux, though, it is important to optimize for that change once the initial development exploration is finished.
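One way to keep such a one-time format change cheap is to put knowledge of the wire format in a single place. This is a sketch under that assumption; the `parse_user` helper and the payloads are made up, and the test's expectation never mentions XML or JSON:

```python
import json
import xml.etree.ElementTree as ET


def parse_user(payload):
    # One place knows the wire format; the tests consume plain dicts.
    # When the API moves from XML to JSON, only this function changes.
    if payload.lstrip().startswith("<"):
        root = ET.fromstring(payload)
        return {child.tag: child.text for child in root}
    return json.loads(payload)


def test_user_created():
    # The same expectation holds whether the input arrives as XML or JSON.
    for payload in ('<user><status>created</status></user>',
                    '{"status": "created"}'):
        assert parse_user(payload)["status"] == "created"
```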
Why is constantly changing input and expectation so bad? The tests you write are intended to identify failures to meet expectations across changes. If your expected result or input needs to be modified every release, you need to evaluate whether your test needs updating or whether there is a regression. I see this leading to a workflow optimized for "fixing" the test by modifying it until it passes.
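Here is a hypothetical sketch of where that workflow ends up: a golden-file test with an update switch, where "fixing" a failure means regenerating the expectation from the very output being checked. The `generate_report` function, the file name, and the `UPDATE_GOLDENS` flag are all made up for illustration:

```python
import json
import os


def generate_report():
    # Hypothetical system under test.
    return {"total": 42}


def test_report_matches_golden(path="expected_report.json"):
    report = generate_report()
    if os.environ.get("UPDATE_GOLDENS") or not os.path.exists(path):
        # The "fix": overwrite the expectation with whatever the system
        # produced, which erases any chance of catching a regression.
        with open(path, "w") as f:
            json.dump(report, f)
    with open(path) as f:
        assert report == json.load(f)
```

Once flipping that switch becomes routine, the test can no longer fail in a way anyone investigates.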
Now in those cases the system is changing, so the tests can be looked at as providing insight into those changes. But I would wonder: is that extra insight? Should there not already be documented changes covering this? Would the time not be better spent getting real insight into how the system functions, identifying risk, and setting up tests that cover a different part of the system?