Testing redux reducers - leveraging selectors

Pete Hodgson

Posted on August 9, 2019

This post is part of a series covering techniques for testing redux reducers. In the first installment we saw that it's better to test reducers and actions together, as an integrated, cohesive unit. In this post we'll take that idea a little further and see what happens when we pull selectors into our reducer testing strategy. Specifically, we're going to see whether we can make our reducer unit tests even more expressive by validating our expectations of redux state using selectors. We'll also look briefly at the inverse: using reducers to make our selector unit tests more expressive.

Completing some todos

Let's start by looking at an example reducer test to see whether we can improve it. As with the other posts in this series, we'll be testing a reducer that manages a list of todos. Specifically, in this example we're testing how we track which todos are complete:

import reducer, {addTodo,completeTodo} from './todos';

describe('todos duck', () => {
  describe('completing todos', () => {
    it('initializes new todos as not completed', () => {
      const todoId = 'todo-a';

      const initialState = undefined;
      const nextState = reducer(
        initialState,
        addTodo({id:todoId,text:'blah'})
      );

      expect(nextState).toEqual([
        {
          id: todoId,
          text: 'blah',
          completed: false
        }
      ]);
    });

    it('allows us to mark an existing todo as completed', () => {
      const todoId = 'todo-a';

      let state = undefined;
      state = reducer(
        state,
        addTodo({id:todoId,text:'blah'})
      );
      state = reducer(
        state,
        completeTodo({id:todoId})
      );

      expect(state).toEqual([
        {
          id: todoId,
          text: 'blah',
          completed: true
        }
      ]);
    });
  });
});

These tests do the job and are pretty readable, although they're a little verbose.

A reader looking at the expectations at the end of these tests might need a moment to understand which part of the expectation is relevant to these specific tests. Is the value of text important here? The answer is no. In the context of this test we don't care at all about that field. However, since we're using toEqual to verify our expectations we have to specify the value of every field, even though we only actually care about the completed field. Put another way, our tests are over-specified.

Over-specified tests are fragile tests

Not only are over-specified tests distracting, but they are also more fragile. Our expectations are explicitly checking every aspect of our state, which means that if we make any changes to that state shape we'll have to touch a bunch of tests that aren't relevant to the change. Let's say we decided to track creation time for each todo by adding a createdAt timestamp. This new field would break the over-specified expectations in these tests, and we'd have to come back and update each one.
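To make that concrete, here's what the first test's expectation would have to become once a createdAt field exists (a sketch, assuming createdAt holds a numeric timestamp):

expect(nextState).toEqual([
  {
    id: todoId,
    text: 'blah',
    completed: false,
    createdAt: expect.any(Number) // 👈 irrelevant to this test, but required anyway
  }
]);

Every test that checks the whole state with toEqual needs the same treatment, even though none of them care about creation times.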

There are a few options for resolving this. Instead of checking the entirety of the state, we could write multiple expectations which check only the individual fields we care about. A minimal sketch, reusing the nextState and todoId variables from the first test:
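expect(nextState[0].id).toBe(todoId);
expect(nextState[0].completed).toBe(false);

That works, but it can get unwieldy pretty quickly as the number of fields and tests grows. We could also look at using some more advanced features of Jest's matchers: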

import reducer, {addTodo,completeTodo} from './todos';

describe('todos duck', () => {
  describe('completing todos', () => {
    it('initializes new todos as not completed', () => {
      const todoId = 'todo-a';

      const initialState = undefined;
      const nextState = reducer(
        initialState,
        addTodo({id:todoId,text:'blah'})
      );

      expect(nextState).toEqual([
        expect.objectContaining({     // 👈 fancier expectations
          id: todoId,
          completed: false
        })
      ]);
    });

    it('allows us to mark an existing todo as completed', () => {
      const todoId = 'todo-a';

      let state = undefined;
      state = reducer(
        state,
        addTodo({id:todoId,text:'blah'})
      );
      state = reducer(
        state,
        completeTodo({id:todoId})
      );

      expect(state).toEqual([
        expect.objectContaining({     // 👈 fancier expectations
          id: todoId,
          completed: true
        })
      ]);
    });
  });
});

This is perhaps a little nicer. We are no longer distracted by an expectation verifying the value of the text field when it's not relevant to the test at hand. More importantly, these tests are more resilient to unrelated changes - we wouldn't need to touch these tests if we added that createdAt timestamp to our state, for example.

However, we're also making our expectation statements more complex, and pulling in features that someone reading the code may well not be familiar with.

Moving to selectors

Let's try taking a different tack. Conceptually, what we're trying to do in these tests is verify the completed-ness of a todo after certain operations. If we were writing production code that wanted to look at the completed-ness of a todo, we wouldn't go spelunking around in our redux state directly - we'd use a selector to extract that information. For example, here's an isTodoCompleted selector:

export function isTodoCompleted(state,todoId){
  const targetTodo = state.find( (todo) => todo.id === todoId );
  return targetTodo.completed;
}

What would it look like if we started using selectors like this in our test code, just like we would in our production code?

import reducer, {addTodo,completeTodo} from './todos';
import {isTodoCompleted} from './selectors';

describe('todos duck', () => {
  describe('completing todos', () => {
    it('initializes new todos as not completed', () => {
      const todoId = 'todo-a';

      const initialState = undefined;
      const nextState = reducer(
        initialState,
        addTodo({id:todoId,text:'blah'})
      );

      // 👇 using a selector to query our state
      expect(isTodoCompleted(nextState,todoId)).toBe(false);
    });

    it('allows us to mark an existing todo as completed', () => {
      const todoId = 'todo-a';

      let state = undefined;
      state = reducer(
        state,
        addTodo({id:todoId,text:'blah'})
      );
      state = reducer(
        state,
        completeTodo({id:todoId})
      );

      // 👇 using a selector to query our state
      expect(isTodoCompleted(state,todoId)).toBe(true);
    });
  });
});

Now we're getting somewhere! These expectations are way easier to read, and more importantly they're a lot more resilient to changes in our production code. If we change the shape of our state in production code we'd necessarily have to update any associated selectors, which means that tests which are using those selectors would get the update "for free". No need to touch a single line of test code unless there's a material change in the external behavior of our redux code. This applies even if we were to make dramatic changes to the internal implementation of behavior that's under test. For example, we could move where we store the completed-ness of a todo to a totally different area of our redux state without needing to touch a single test. This is a big win, and in my opinion the biggest motivator for moving to a more integrated approach for testing redux reducers.
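To make that last point concrete, here's a hedged sketch of such a refactoring. Suppose we moved to a hypothetical state shape that keeps a separate list of completed ids (this shape is an illustration, not the actual code from these examples). Only the reducer and the selector would need to change; the tests above would keep passing untouched:

// Hypothetical alternative state shape: { todos: [...], completedIds: [...] }
export function isTodoCompleted(state, todoId) {
  // completed-ness now lives in its own list, but callers (and tests) don't care
  return state.completedIds.includes(todoId);
}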

Testing selectors

We've just been looking at using selectors to verify the state produced by a reducer, but we can also do the inverse - use a reducer to set up state which we then use to verify a selector's behavior.

Here we have a new selector, getTodoCounts, which tells us how many todos we have that are complete vs incomplete:

export function getTodoCounts(state){
  const completedTodos = state.filter( (todo) => todo.completed );

  const total = state.length;
  const complete = completedTodos.length;
  const incomplete = total - complete;

  return {
    total,
    complete,
    incomplete
  };
}

To test this selector we want to get our redux state into a shape that simulates having a few todos in our list, one of which is completed. We could create that simulated state by hand in our test, but this approach would have many of the same drawbacks as verifying state by hand in our reducer tests: it would be verbose, hard to read, and it would make the test implementation pretty fragile, since changes to the shape of our redux state would require changes to these tests. So, rather than hand-crafting this state, let's use our reducer instead:

import reducer, {addTodo,completeTodo} from './todos';
import {getTodoCounts} from './selectors';

describe('todos selectors', () => {
  it('correctly counts todos in various states', () => {
    let state = undefined;
    state = reducer( state, addTodo({id:'todo-a',text:'blah'}) );
    state = reducer( state, addTodo({id:'todo-b',text:'blah'}) );
    state = reducer( state, addTodo({id:'todo-c',text:'blah'}) );
    state = reducer( state, completeTodo({id:'todo-b'}) );

    expect(getTodoCounts(state)).toEqual({
      total: 3,
      complete: 1,
      incomplete: 2
    });
  });
});

This should be fairly self-explanatory. Starting from an empty initial state, we add three todos, complete one, and then verify that the counts that come out of getTodoCounts match what just happened. This style of selector test has the same benefits that we saw with our reducer tests - the tests are more expressive and way less coupled to the internals of how we model the todo list state.

Couldn't we use snapshot testing?

Some folks might be wondering about Jest's snapshot testing feature as another way to verify the state that a reducer operation creates. Here's how we could rewrite the reducer tests we were looking at previously to use snapshots:

import reducer, {addTodo,completeTodo} from './todos';

describe('todos duck', () => {
  describe('completing todos', () => {
    it('initializes new todos as not completed', () => {
      const todoId = 'todo-a';

      const initialState = undefined;
      const nextState = reducer(
        initialState,
        addTodo({id:todoId,text:'blah'})
      );

      expect(nextState).toMatchSnapshot();
    });

    it('allows us to mark an existing todo as completed', () => {
      const todoId = 'todo-a';

      let state = undefined;
      state = reducer(
        state,
        addTodo({id:todoId,text:'blah'})
      );
      state = reducer(
        state,
        completeTodo({id:todoId})
      );

      expect(state).toMatchSnapshot();
    });
  });
});

I'm not a fan of this approach, for two main reasons: these tests no longer communicate expected behavior, and they are extremely brittle. While these tests are certainly succinct, I would not characterize them as expressive. Looking at them, can we tell what these addTodo and completeTodo actions are supposed to do to our redux state? We can guess based on the descriptions of the tests and the names of the actions themselves, but the tests themselves don't communicate anything other than "it should work the same way it did last time". The flip side of this issue is that if anything at all changes in the shape of the state, these tests will start to fail. They are fragile in the same way that our original tests using toEqual were. And while it's easy to re-record the snapshots with jest --updateSnapshot, that's really just deodorant over the underlying problem - an over-specified test.
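For reference, the snapshot Jest records for the second test would look something like this (an approximation; the file name and exact serialization depend on your test file and Jest version):

// __snapshots__/todos.test.js.snap (approximate)
exports[`todos duck completing todos allows us to mark an existing todo as completed 1`] = `
Array [
  Object {
    "completed": true,
    "id": "todo-a",
    "text": "blah",
  },
]
`;

Every field of every todo is baked into that file, which is exactly why an unrelated addition like createdAt would fail all of these tests.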

Drawbacks to using selectors

We've seen that when we use selectors in our reducer tests we get tests which are more expressive and less brittle. But what are some of the drawbacks? One concern is that as our tests pull in more production code, they become less focused. If we get a failure from a reducer test which is using a selector, we don't initially know whether the issue is with the reducer we're testing or with the selector we're using to help us test. We'd have to do some digging to figure that out.

This is an example of the more general set of tradeoffs you'd see along the spectrum between focused, isolated tests and broader, integrated tests:

more isolated | more integrated
focused: reason for a test failure is easier to locate | broader: failing test could have several causes
artificial: tend to rely on test doubles, chance that these don't reflect reality | realistic: real dependencies are used, tests are closer to production scenarios
brittle: changes in a dependency's code are more likely to require test changes | supple: changes to dependencies are less likely to require changes to test code

The best way to work within these tradeoffs is to create a blended test portfolio, where various styles of testing are compensating for the drawbacks of other styles.

Moving beyond ducks

At the start of this article I mentioned another post, which argued for including redux actions in our reducer tests, and showed how that revealed the shape of a larger organizational unit in our redux code - the duck. I pointed out how action creators form a sort of public interface for the reducer, and that we should be testing the duck as an integrated unit.

I would now take that argument a step further. From our exploration of testing strategies, it is clear to me that selectors form the missing second half of a duck's public interface.

[Diagram: a duck module, where selectors provide a public query interface and actions provide a public command interface]

This structure is quite reminiscent of a CQRS (Command Query Responsibility Segregation) architecture. Selectors are Queries, used to inspect the current state of our application, while Actions are Commands, used to effect changes to that state.¹

While the original proposal for the duck module doesn't mention selectors, I believe it makes sense to expand the duck's scope to include them. I'm not the first to point this out.
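As a rough sketch of what that expanded public interface might look like (the action type strings and implementation details here are my own assumptions, not code from the original duck proposal):

// todos.js - a duck whose public interface is actions (commands) and selectors (queries),
// with the reducer itself as the default export
export const addTodo = ({id, text}) => ({type: 'todos/ADD', payload: {id, text}});
export const completeTodo = ({id}) => ({type: 'todos/COMPLETE', payload: {id}});

export const isTodoCompleted = (state, todoId) =>
  state.find((todo) => todo.id === todoId).completed;

export default function reducer(state = [], action) {
  switch (action.type) {
    case 'todos/ADD':
      return [...state, {...action.payload, completed: false}];
    case 'todos/COMPLETE':
      return state.map((todo) =>
        todo.id === action.payload.id ? {...todo, completed: true} : todo
      );
    default:
      return state;
  }
}

Note that the examples earlier in this post import selectors from a separate ./selectors file; colocating them in (or re-exporting them from) the duck is one way to make them an explicit part of its public interface.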

Summarizing our options

In this article we've used four different approaches to validating expectations of redux state.

We used Jest's toEqual matcher to do exact equality matching:

expect(state).toEqual([
  {
    id: todoId,
    text: 'blah',
    completed: true
  }
]);

We used Jest's objectContaining matcher for more flexible, partial matching:

expect(state).toEqual([
  expect.objectContaining({
    id: todoId,
    completed: true
  })
]);

We used selectors:

expect(isTodoCompleted(state,todoId)).toBe(true);

And finally we tried out Jest's snapshot testing facility:

expect(state).toMatchSnapshot();

Looking at these options side by side, it's even clearer that selectors are the most expressive option. We've also seen that selectors keep our tests more flexible and more resilient to production code refactoring.

Next time you're writing a reducer test, I hope you'll consider using selectors to validate redux state. Let me know how it works out! And follow me here or on Twitter to get notified when I publish the third post in this series, which will focus on more expressive ways to write higher-level, stateful unit tests for reducers.


  1. I find it quite interesting that redux, which hews closely to a functional programming style, ends up modeling Command/Query Separation, which comes very much from an Object-Oriented perspective of software. My friend and former colleague Andrew Kiellor pointed this out to me.
