Asking customers what you want to hear
There is nothing more insightful than learning how to improve your business directly from your customers. The challenge is asking customers the right questions.
Qualitative research data is only as good as the questions asked. Bad questions mean bad data, no matter how helpful and enthusiastic your customers are in response.
Many of our customers use Intercom to ask their users questions about their product, often to understand why people do or do not use an existing feature, or whether they would use an upcoming one. Here are a few tips for asking great questions, so that the responses can be used to build features that will actually be used.
Bad questions encourage prediction of future usage
Research study after research study has shown that people are very bad at predicting their future behaviour and attitudes. Therefore one of the worst, but sadly most common, research questions to ask is: “Would you use feature ‘x’ if we built it?”. Many biases shape the response. For example, people assume that because you suggested the feature, you must think building it is a good idea, so authority bias and a little social proof nudge them to tell you that they would definitely use it. A second problem is that people state preferences and opinions simply because they were asked; without being asked, they would never have thought about the feature, let alone needed it. This is called the query effect. People are incredible storytellers, and when asked, they can create detailed accounts of things that don’t actually matter to them.
Bad research questions also encourage generalised accounts of usage, for example: “How do you normally do ‘x’?” This produces an idealised, rationalised description of a workflow, and misses the details that make the workflow difficult.
Good questions focus on actual, recent usage
A better way to discover whether a feature idea would be useful is to ask about specific recent usage. For example, “The last time you used feature ‘x’, what were you trying to do?” This is a good question for three reasons:
First, it is grounded in actual usage rather than predicted future use. This is information about something that actually happened. The responses you get will include the context around the job, and the many factors that get overlooked when predicting future use.
Second, it is focused not on the feature, but on the job the user is trying to get done. People often use features as workarounds, because the right feature for them doesn’t exist yet. By focusing on the job people are trying to do, you often learn that improving the existing feature is far less valuable than building a different feature entirely to support that job.
Third, it corrects for problems with people’s memory. The human brain is relatively poor at remembering events, in particular their sequence and their details. In many cases, when our brain can’t remember the details, it makes things up to fill in the gaps! This happens subconsciously, so we don’t even know how inaccurate our memory of something is. As a rule of thumb, our memory is pretty accurate within 24 hours, reasonably accurate up to a week or so, but degrades badly beyond that. Therefore it is important to filter your question down to people who have used the feature recently, i.e. only ask people who have used it in the last 2 or 3 days. You can use custom data in Intercom to do this.
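If you want to target the question at people who used a feature in the last two or three days, one approach is to record a “last used” timestamp as a custom attribute whenever the feature is used, and then filter on that attribute when sending your message. Below is a minimal sketch using the Intercom Messenger’s JavaScript update call; the attribute name feature_x_last_used_at is hypothetical, so substitute whatever fits your own data schema.

```typescript
// Minimal sketch: record feature recency as an Intercom custom attribute.
// Assumes the Intercom Messenger snippet is already installed on the page.
// "feature_x_last_used_at" is a hypothetical attribute name; timestamps are
// sent as Unix seconds so they can be treated as dates when filtering users.

declare global {
  interface Window {
    Intercom: (command: string, payload?: Record<string, unknown>) => void;
  }
}

// Call this wherever feature "x" is actually used in your product.
export function trackFeatureXUsage(): void {
  window.Intercom("update", {
    feature_x_last_used_at: Math.floor(Date.now() / 1000),
  });
}
```

With an attribute like this in place, you can restrict the audience for your question to users whose last-used date falls within the previous few days, so you are only asking people whose memory of the feature is still fresh.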
When a feature doesn’t yet exist, you can ask about the job itself. For example, “The last time you did ‘Job A’, how did you do it?” A great follow-up is to ask them to walk you through the steps they took, and where things worked well and not so well. Again, this should have happened recently. The longer the gap between the activity and the question, the less reliable people’s memory, the more imaginary detail gets filled in subconsciously, and the less useful the data.
Don’t accidentally hear what you want to
A final tip concerns how you analyse the responses you get. People have a strong tendency to interpret new information in a way that makes it consistent with what they already think. This is called confirmation bias. We filter out information that contradicts our existing beliefs, which gives us all a strong, natural tendency to downplay negative feedback on our product ideas. This is a bias we are all subject to, and it needs to be managed, so be hard on yourself and over-correct for it.
Do quick and simple research
It’s incredibly powerful to have tools that enable you to ask your user base questions and receive responses in minutes. We’ve all been in product meetings, endlessly debating features with our colleagues. Now you can jump on a customer feedback tool like Intercom while the meeting is still ongoing, and gain insight into what your users actually need within minutes. The trick is to ensure that the questions you ask are the right ones.