Real world testing: Using business and technical requirements to know what to test


Corey Cleary

Posted on January 22, 2019


Originally published at coreycleary.me. This is a cross-post from my content blog. I publish new content every week or two, and you can sign up to my newsletter if you'd like to receive my articles directly to your inbox! I also regularly send cheatsheets and other freebies.

This is the next post in my ongoing series on knowing what to test in your Node applications by applying recipes to real-world scenarios.

In the first one, we went over scenarios you should cover in your tests when calling a database from a Node service.

And in the second one, we covered what scenarios to write tests for when your application calls out to an external API.

This post will be a slight departure from the previous two, but it still covers similar ground.

Ultimately, the goal of this series is to identify common things you should test in certain scenarios, because it's often difficult to know exactly what to test. And there are enough common scenarios you'll encounter as a developer that it's possible to identify a common set of things to write tests for. That way, you won't have to waste time scratching your head trying to figure out what tests to write when you have lots of other development work to get done.

But what about when you run into a scenario that's outside the ones covered in this series? Or what about scenarios that are more specific to the uniqueness of your application?

Using requirements to derive your tests

The solution lies in using documented business and technical requirements to derive your tests.

That might sound overly simple, but it will get you very far in writing your tests. And that's good, because we want writing tests to be easy!

As for terminology - there is no clear line that differentiates what a "business" requirement is from what a "technical" requirement is. For the purposes of this post we'll roughly define them as:

  • Business: written from the point of view of the user (usually written by a Business Analyst or Product Manager)
  • Technical: written from the point of view of the application (usually written by a Developer, System Analyst, or Architect)

And of course, sometimes developers wear multiple hats and there are no BAs or PMs at all, but the requirements a developer comes up with will still tend to fall into one or both of the above categories.

Who wrote the requirements isn't important. What is important is understanding which category a requirement falls into, because how it's written will likely affect the test cases you derive from it.

On that note: what if you don't actually have any defined requirements from a Business Analyst or Product Manager to work off of? Or what if this is a side project for you, and you don't have a BA or PM at all? I'd argue you always have requirements, even if they're "unspoken" or "implied" instead of being documented somewhere in a project planning tool like Jira. And there are ways to surface those unspoken requirements and make them explicit, but that's outside the scope of this post for now.

The process

  1. Translate the requirement into technical/code requirement(s)
  2. Write test(s) for what should happen - happy path (normal use case, no errors, no empty things returned, etc.)
  3. Write test(s) for what should not happen - unhappy path (error returned, item not found, empty array returned, etc.). Note: might not always be applicable! (See the sketch below.)
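
To make steps 2 and 3 concrete, here's a minimal sketch of that happy/unhappy pairing, assuming a Jest-style runner and a hypothetical findUser() helper (both are illustrative, not part of any requirement we've discussed):

// A minimal happy/unhappy pair for a hypothetical findUser() helper
it('should return the user when a matching id exists', async () => {
  const user = await findUser(42) // happy path
  expect(user.id).toBe(42)
})

it('should return null when no user matches the id', async () => {
  const user = await findUser(-1) // unhappy path: nothing found
  expect(user).toBeNull()
})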

The first step is what every developer does when implementing a feature. Taking a requirement, then thinking about how you would implement it in code at a high level, is basically how software gets written. But I included it to point out that you might sometimes get a requirement you can't immediately write a test from; you need to "break it down" first. The tests are still derived from the requirement, but you might have some preliminary work to do before you can write them.

For example, imagine you have a requirement along the lines of "user should be able to login to the application". Obviously this is a very broad requirement. When you go to implement this in code, chances are you aren't going to just write a test that says "should be able to log into application" (although this would make for a good end-to-end test, which I will get into a little bit later) before you've broken that down a bit.

What you'd likely do instead is break that down into some technical/code requirements: "OK, if a user should be able to log into an application, first I need a server route that checks their username/password and logs them in. I also need inputs on the UI that take the username and password and send them to the server through a form. Those inputs probably need some client-side validation to make sure the user can't pass in bad characters that will blow up our database, to check for cross-site scripting, etc."
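
As a rough sketch of what the server-route piece of that breakdown might turn into (assuming an Express app tested with Jest and supertest; the /login route, credentials, and status codes are all assumptions for illustration):

const request = require('supertest')
const app = require('./app') // hypothetical Express app with a /login route

it('should log the user in when credentials are valid', async () => {
  const res = await request(app)
    .post('/login')
    .send({ username: 'jane', password: 'correct-password' })
  expect(res.status).toBe(200)
})

it('should reject the login when credentials are invalid', async () => {
  const res = await request(app)
    .post('/login')
    .send({ username: 'jane', password: 'wrong-password' })
  expect(res.status).toBe(401)
})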

Now, even though we didn't write a test from that requirement directly, we based the tests off of that requirement and the requirements we derived from it.

For some requirements, especially if they are more technical in nature, you'll be able to write the tests basically directly from the requirement without much "prep work". But the process remains the same.

I recommend always writing the happy path test first, then the associated unhappy path test. The happy path is ideally what the user will encounter most often, so we want to design the code to meet that first. Then, when we implement the unhappy path (assuming it's relevant for our requirement), we can make whatever changes to the code are needed to support it.

And a note on terminology - "unhappy path" as you'll sometimes hear it used means the "error" path, as in errors thrown, servers down, etc. I'm using it a bit more broadly here, to describe not only errors but also things like an empty array being returned (maybe the database found no results) and things that shouldn't happen in the context of the requirement. For example, a non-logged-in user shouldn't be able to save preferences.
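
That last case might look something like this as a test (a sketch only; savePreferences and its return shape are made-up names):

// "Unhappy" here isn't an error - it's behavior that simply shouldn't happen
it('should not save preferences for a non-logged-in user', async () => {
  const result = await savePreferences({ user: null, theme: 'dark' })
  expect(result.saved).toBe(false)
})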

It might seem simple, but this is the process I use to generate plenty of meaningful tests. And writing tests that mean something is what matters most.

And the nice thing about this is that it works for both UI and server-side tests.

Walking through an example

Let's walk through an example.

The example requirements will be for a feature we're building to fetch products for an Amazon-like web store:

"As a user, I should be able to:"

  • View up to 50 products on the page, based on what I searched for
  • Sort by newest products first
  • See the products that are on sale displayed first, ahead of the others that aren't on sale
  • If I am logged in, I should be able to see items in my favorites

I'm going to list the tests I would write for the first and second requirements, and you can use the third and fourth to practice.

And note that while there could be many different possible tests you could write, these are ones I came up with on a first pass. You might think of more!

I'll also kind of narrate the process for the first couple of tests so you can get an idea of how I would think about it:

OK, the requirement is that the user needs to be able to view up to 50 products on the page based on a search term. This isn't too broad, but what does this look like from a technical perspective? How would this translate to a technical requirement? Obviously I need the server to pull up to 50 items from the database based on the search term. That sounds like a good technical requirement.

it('should return 50 products max based on searched item')

Now, what if there aren't 50 items returned, but, say, 23?

it('should return as many products as search finds')

Great, now what if no items are returned?

it('should return empty array if no products found')

The UI also needs to show those 50 products. That's more of an integration or E2E test, but I should write it too:

it('should display 50 products')

Now we're off to the races...

The "complete" list of tests:

// Node
// unit/integration
it('should return 50 products max based on searched item')

it('should return as many products as search finds') // for cases where search finds products but less than 50

it('should throw error if service hits error when calling database')

it('should return empty array if no products found')

// E2E/integration
it('should return 200 HTTP status code with the products')

it('should return 500 HTTP status code if service throws error')

// UI
it('should display 50 products')

it('should display error page on UI if API call returns 500')

it('should display message if search resulted in no items found')

// Second requirement: sorting
// sorting function, doesn't really have an "unhappy" state
it('should sort by date descending')
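
To show how one of these might be fleshed out, here's a sketch of the first two Node tests, assuming Jest and a hypothetical getProducts(searchTerm, db) service that caps results at 50 (all names are illustrative, not from a real codebase):

// Hypothetical service under test - the real signature depends on your codebase
const { getProducts } = require('./product-service')

// Stubbed database client that "finds" whatever products we hand it
const fakeDb = (products) => ({ search: async () => products })

it('should return 50 products max based on searched item', async () => {
  const seventy = Array.from({ length: 70 }, (_, i) => ({ id: i }))
  const products = await getProducts('laptop', fakeDb(seventy))
  expect(products).toHaveLength(50)
})

it('should return as many products as search finds', async () => {
  const products = await getProducts('laptop', fakeDb([{ id: 1 }, { id: 2 }]))
  expect(products).toHaveLength(2)
})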

You can see that from the fairly small list of requirements, we've already generated lots of different test cases! And more importantly, test cases that should prevent future changes from breaking the code and make us confident that the application is functioning as the user expects.

As mentioned in the process section, not all requirements will have an "unhappy" path. If you missed it above, the sorting function is a good example of this. There isn't really a "bad" sort in our case, so we're fine with just having the "happy" path, normal-functioning test.

Different types of tests these requirements will produce

As discussed above, requirements can be broad and sometimes you'll have to break them down a bit before you're able to start writing the tests. But you do have a starting place.

And when you're writing your tests from these requirements, different types of tests will come out: unit tests, integration tests, and end-to-end (E2E) tests.

Referring back to the previous example of a user being able to log in to the application: because the base requirement was broad, we probably wouldn't write our first tests from it alone. But it is helpful to write tests for the base requirement after you've written tests for your smaller, more broken-down requirements.

Smaller requirements will generally generate unit tests, and broader requirements are good for generating integration and/or E2E tests. A test for our broader requirement, in this case that a user can log in, could be covered by an E2E test, the test case being taken directly from our original requirement - "User should be able to log in to the application". This test case is an E2E test because it will likely hit all of our code - the UI, the form, the server route, the database, etc.
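
As a sketch, that broad login requirement might become an E2E test like this (assuming Cypress; the URL, selectors, and post-login message are made up for illustration):

// E2E: exercises the UI form, the server route, and the database together
it('user should be able to log in to the application', () => {
  cy.visit('/login')
  cy.get('input[name="username"]').type('jane')
  cy.get('input[name="password"]').type('correct-password')
  cy.get('form').submit()
  cy.contains('Welcome back').should('be.visible') // assumed post-login message
})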

A comment on TDD

Lastly, even though you see a "complete" list above, don't be fooled. I showed the full list of test cases only to give an example of the process/approach.

But when it comes to actual development, you shouldn't write all your tests up front and then go about writing the accompanying code. Sometimes people think this is how TDD (Test Driven Development) is done, but that is not the case.

Whether you're doing TDD (writing the test first) or writing the tests after, follow the process outlined in this post and don't write all the tests up front :)

Takeaways

Next time you're not sure what to actually write a test for, try out the process discussed here:

  1. Translate the requirement into technical/code requirement(s)
  2. Write test(s) for what should happen
  3. Write test(s) for what should not happen, if applicable

I think you'll find writing tests becomes much easier when you're using the requirements to derive your test cases.

And as I mentioned in the beginning of this post, this is part of an ongoing series on knowing what to test in your JS/Node applications by applying recipes to real-world scenarios.

There are more testing posts on the way - if you want to learn not only how to test but what to test, sign up to the newsletter to be notified as soon as the next post is released!
