What Features Do I Keep? - Design Tips for Software, Board Games, and Knives
nwdunlap17
Posted on September 23, 2019
The 'Everything' Machine
“Focusing is about saying ‘No’.” – Steve Jobs
We’ve all been there: you’re creating something, maybe a project for school, a Flash game, or a fancy new kitchen appliance. As a designer, it’s your job to answer the question, “What can my creation do?” And there’s a natural temptation among developers to answer that question with “Everything”. After all, more is better, right?
That instinct is a trap. Variety sounds nice in theory, but in practice it causes quite a few problems. Every feature added to a product forces the other features to be altered to accommodate it. A tool might have to be reshaped, software might take on an overly general scope, and a marketing campaign might become diluted. Even if you avoid those pitfalls, the extra development time and complexity will manifest as a higher price. As a result, it’s generally more efficient to make multiple focused products than one product that tries to cover everything.
Here’s a real-world example. Off the top of your head, what tool would you use for the following tasks?
- Insert a screw into IKEA furniture.
- Open a beer bottle.
- Cut a shape out of some construction paper.
I’m guessing your answers were a screwdriver, a bottle opener, and a pair of scissors. You could have answered ‘Swiss Army knife’ for every question, but it’s pretty clear why that wouldn’t be a good first choice. Each tool has to be shrunk down to cram into the knife with all the others. That makes it difficult to rotate the T-shaped screwdriver, get a grip on the thin bottle opener, or cut quickly with the short scissors.
The Swiss Army knife is actually an interesting example, because it gets a benefit in return for its compromises. Its small size and light weight make it far more portable than full-sized tools. You would never use a Swiss Army knife if a dedicated tool were available, but because it fits in your pocket, it always gives you a serviceable option. In this way, it’s not competing with ‘proper’ tools; it’s targeting its very own niche!
The traditional Swiss Army knife was designed with a certain amount of restraint: as versatile as these knives are, they usually have fewer than ten functions each. Let’s imagine what would happen if the manufacturers ran wild and added every feature they could think of. There are so many tools out there! What if they added pliers? A magnifying glass? A wrench? What about a saw? Sure, throw ‘em all in there! Pretty soon you’ll have something like this!
That monstrosity costs over four hundred dollars. And many of those tools are unusable. A magnifying glass that small is pointless, and can you imagine anything you could cut with that tiny saw? Worse, it’s gotten so wide that it will hardly fit in a pocket anymore. At this point, any customer is better off just buying a tool belt.
This thing is suffering from a bad case of Feature Creep: the tendency for designers to keep adding more and more features to a product, well past the point where it stops making sense. In a physical product like the one above, it’s pretty easy for any outsider to see that things have gotten out of hand. But when designing software, that bad habit becomes significantly harder to spot.
Build a Solid Foundation to Prevent Feature Creep
How do you know whether a feature is going to improve or bog down your design? To answer this question, you need to have a firm understanding of what your product’s use case is. Before beginning a project, you should answer the following questions as thoroughly as possible.
- Who is this product for?
- When will people use this product?
- What do I want people to feel while using this product?
- How will this product be differentiated from similar products that already exist?
It’s important to note that the answers to these questions are not features; they’re probably not even user stories. You shouldn’t say, ‘My product will have features X, Y, and Z’, because you may find in prototyping that those features don’t work as well as you thought. Instead, you should say, ‘My product will help consumers solve problem X’. That way, the driving focus of your product is explicit, and you and your team have a metric for comparing different implementations.
Once you’ve answered these questions, write them down and treat them as gospel. Whenever you need to make a design decision, go back to your answers and ask yourself, “Do I need this feature in order to justify my answer?” You should only implement the feature if the answer is a resounding yes. These foundational questions also provide a benchmark for how close to complete your project is. After all, you’re not done until you can check off every part of your foundation.
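As a toy illustration, here’s one way the ‘treat your answers as gospel’ rule might be encoded. The questions, answers, and function names below are hypothetical, not a prescribed tool; the point is simply that a feature which can’t name the foundational answer it serves doesn’t get in.

```python
# A minimal sketch of gating features on a written-down foundation.
# All questions and example answers below are invented for illustration.
FOUNDATION = {
    "who": "casual groups of 3-5 players",
    "when": "game nights lasting under an hour",
    "feel": "lighthearted tension and table talk",
    "differentiator": "no player elimination, ever",
}

def should_implement(feature: str, serves: set[str]) -> bool:
    """A feature gets in only if it serves part of the foundation."""
    unknown = serves - set(FOUNDATION)
    if unknown:
        raise ValueError(f"{feature!r} cites answers outside the foundation: {unknown}")
    return bool(serves)

print(should_implement("spectator betting mini-game", set()))      # False: cut it
print(should_implement("shared-victory ending", {"feel", "who"}))  # True: keep it
```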
But how can you decide the best method for accomplishing the goals you’ve set out for yourself? How do you know whether feature A or feature B is a better fit for your product?
Customer Feedback
There’s only one way to understand what user experience you’re creating, and that’s to put your product in front of users. Frequent cycles of prototyping and testing are necessary to ensure that you’re putting out the best product possible. There are many ways to gather customer feedback, depending on the resources that you have available.
Many mobile app companies randomly roll out different versions of an update to different users, then compare the data to see how each group responds. Monitoring how much time users spend in your product, or which features they use most frequently, is another great source of data if you can implement systems to track it. But the most basic method is tried and true: simply asking users questions about their experience.
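For illustration, here’s a minimal sketch of how that kind of random assignment is often done in practice. The experiment name, variant labels, and user IDs are hypothetical; the useful property is that hashing a stable user ID keeps each user in the same variant across sessions without storing any assignment state.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user into one of an experiment's variants."""
    # Hash (experiment, user_id) so the same user always lands in the same
    # bucket for this experiment, but independently across experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical experiment comparing two versions of an update.
print(assign_variant("user-4187", "onboarding-v2", ["control", "new_flow"]))
```

From there, comparing the two groups is just a matter of logging which variant each user saw alongside whatever engagement metrics you already track.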
“If I had asked the people what they wanted, they would have said faster horses”
– Henry Ford
The bad news is that this is much easier said than done. There are many hurdles to getting useful feedback from users. Unfortunately, when asked ‘How did you like the product?’, most people don’t automatically respond with well-thought-out criticism and relevant feedback. If you want actionable information, you’re going to have to prod your customers in the right direction with carefully directed questions.
As a general rule, keep your questions as specific as possible. You want questions that force the user to think about their experience and provide critical, specific feedback (with examples and anecdotes if possible). Each question should be directed and have a defined purpose. Also, you probably shouldn’t ask the same set of questions every testing cycle; focus on refining a few features at a time.
When I run playtests for tabletop or board games, the questions I ask tend to be the following:
- (After explaining the premise but before play) What are you most excited to do or try? – This helps me understand how well my pitch/promotional material is setting expectations. If they are expecting something totally off base, I need to improve the clarity of my writing.
- What was the most confusing or frustrating part? – This tends to be the first question I ask after a playtest. It forces the player to think through the session, and whatever they say will be a form of criticism. I find that a lot of groups are a little hesitant to bring up the negative parts of their experience. This question is good at setting the tone of the discussion and getting everyone in the right mindset.
- Why did you choose option X over option Y? – I want players to be making choices during the experience, but I want to make sure the reasons behind those choices are more along the lines of ‘X is what made sense at the time’ instead of ‘X is obviously better than Y’.
- What was the most interesting or memorable thing another player did? - I have some mechanics in my game that encourage players to pay attention to and support one another. I don’t really care about the specifics of their answer here. I’m actually just checking that their answer isn’t ‘I don’t know’, because that would mean my mechanics are failing.
Interpreting the Data
We’ve collected all of our feedback for the testing cycle. We’ve asked our questions, written down all their answers, and maybe even have some nice objective data recorded automatically. So now all we have to do is just implement the fixes the customers suggested. Easy, right?
I wish.
It turns out that the truth is often obscured in feedback. Customers can fail to notice key features, misattribute perceived problems, and make incorrect assumptions. Many people jump straight to features in their feedback, detailing ‘solutions’ that either miss the mark or don’t fit your design goals. Even seemingly objective data can be muddied by survivorship bias or skewed by unseen factors. There are plenty of stories out there about customers complaining about a product for the wrong reasons.
Here’s a famous anecdote from the development of 2003’s Wolfenstein: Enemy Territory, a standard WWII shooter whose multiplayer pits an Axis team against an Allied one. During playtesting, a massive number of players found the Allies’ Thompson submachine gun vastly superior to its Axis counterpart, the MP40. This greatly surprised the development team, because the two guns had identical stats. The only differences between the two guns were the model and the sound effect, yet players consistently felt the guns played very differently. “The general consensus was that if you had a Thompson, it was slower but harder hitting, and the MP40 was faster, but it was weaker,” recounted one developer. Even more surprising to the team was that the players were right: when they looked at the game stats, the Thompson scored a much higher ratio of kills than the supposedly identical MP40. After a thoroughly confused examination, the team concluded that the difference came down to the sound effects produced by the two guns. The Thompson’s gunshot had significantly more bass than the MP40’s. It sounded more powerful, players felt more confident, and that confidence led them to attempt headshots at a higher rate than players with the MP40. The ‘fix’ for this issue involved nothing more than reducing the bass on the Thompson’s shots. Afterward, players felt that the gun had been ‘brought down’ to the proper power level.
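For a sense of how a gap like that can surface in the numbers, here’s a rough sketch of a per-weapon kill-rate check over playtest telemetry. The log format and field names are invented for illustration and have nothing to do with the actual Enemy Territory tooling.

```python
from collections import Counter

# Hypothetical playtest telemetry: one record per shot fired.
shots = [
    {"weapon": "thompson", "kill": True},
    {"weapon": "thompson", "kill": False},
    {"weapon": "mp40", "kill": False},
    {"weapon": "mp40", "kill": True},
    # ...thousands more records in a real playtest...
]

fired = Counter(s["weapon"] for s in shots)
kills = Counter(s["weapon"] for s in shots if s["kill"])

for weapon in sorted(fired):
    rate = kills[weapon] / fired[weapon]
    print(f"{weapon}: {kills[weapon]}/{fired[weapon]} kills ({rate:.1%})")
```

If two ‘identical’ weapons show meaningfully different rates on a large sample, the difference has to be coming from somewhere outside the stat sheet, which is exactly the kind of clue the team followed.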
It’s important to properly understand the moral of this story. Even though there technically wasn’t a problem, customers perceived one, and believed it so strongly that it actually warped the collected data. This invented problem still made customers unhappy, and it would have driven them away if left unchecked. In that sense, the perceived problem was just as real as any other.
So, when you have your data, and you notice a strong trend, don’t just take it at face value. Investigate, iterate, and test possible solutions, because the real problem might be something everyone has taken for granted.