Stop Guessing and Validate What Your Customers Want

How to avoid confirmation bias traps in product development

In agile, everything we do is an experiment, and product development is no different. We think we know what the customer wants, and the customer thinks they know what they want, but it turns out we’re all wrong!

How can that be? Confirmation bias, fed by false positives, lures us into one of two traps. Here’s how to identify and avoid these traps and build products customers actually want to use.

The “Customer knows all” trap:

Customer interaction is a great way to get feedback, but when that interaction isn’t targeted at the right customer segment, it’s often not helpful. Think about what online product reviews, survey feedback, and service desk calls have in common: extremes. Customers generally use these avenues when they really LOVE or really HATE something.

While direct customer feedback is helpful, we often look at it in a vacuum. When we don’t account for how extreme this feedback tends to be, we probably do the wrong thing for the other customers who frankly just don’t care enough to comment. Yes, our lovers and haters make a lot of noise, but they are usually outliers on the bell curve and a minority of our overall customer base. It’s hard to ignore this direct feedback, but if we listen to it exclusively, we’re not focusing on the right thing for the customers in the middle of the bell curve. The majority of customers, who are often more apathetic but contribute a larger share to our bottom line, need a louder voice in our product decisions. Customer feedback needs to be tested multiple times with multiple types of customers.

The “But I proved it” trap:

A minimum viable product (MVP) is centered around a hypothesis. What is a hypothesis? Well, if we think back to 7th-grade science, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation” (Google Dictionary). Thinking back to the scientific method, we are trying to prove a hypothesis false. Even if it becomes a theory, that’s still only because it has never been proven false. In other words, we can’t prove a hypothesis; we can only fail to disprove it.

We need to think differently when testing our hypotheses and stop using the language of trying to “prove” them. Re-read the above: they cannot be proven. This trap usually comes down to confirmation bias: we find evidence that supports our hypothesis, so it must be the right direction – it must be true! When we let confirmation bias hijack our hypothesis-driven experiments, we stop testing too early. We then potentially miss information around the next bend that could have disproved our hypothesis (and, in turn, prevented a bad product decision).

Here’s an example: When I was working for a large retail company, new product ideas often came from snooping on other retail websites to see what competitors were doing (yep, that was the research). In this instance the hypothesis was, “If we offer a gift packaging service for our customers who order online, we will better compete with <insert competitor here> because they are doing it.” The “test” was basically copying someone else in the hope that we could compete. Even though another company was offering the same service, that didn’t mean it was a winning idea for us or our customers. Unfortunately, when we see one example, we gravitate toward it and ignore the data that doesn’t support our hypothesis, which is the definition of confirmation bias. In this case, while our customers did use the gift packaging service, it did not meaningfully change our position against said competitor, and the return was poor compared to the cost of developing it. We didn’t do enough research.

I’m not advising you to never believe any feedback. Rather, I’m advising you to gather enough feedback to validate the idea and make an informed decision. What is enough feedback? It’s different for everyone, but make sure it’s more than one data point and, where possible, statistically significant; that the metric is defined explicitly; that other changes are explored; and that you’re not confusing correlation with causation. Leading and lagging indicators are often missed, as are the less obvious but equally valuable trends. In other words, do your homework and don’t implement something just because someone had an idea and some small data set matched it. While it IS exciting to get feedback that agrees with us, if we stop testing too quickly and assume we’ve proved the absolute correct direction after limited positive outcomes, we’ve done ourselves and our customers a disservice.
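To make “statistically significant” a little more concrete, here is a minimal sketch of how a team might sanity-check a result like the gift packaging experiment before declaring victory. The conversion numbers below are invented for illustration (not data from the example above), and the approach is just one option: a simple two-proportion z-test in Python using SciPy.

    from math import sqrt
    from scipy.stats import norm

    # Hypothetical counts: checkouts with and without the new gift packaging option.
    # These figures are illustrative only.
    control_conversions, control_visitors = 480, 10_000
    variant_conversions, variant_visitors = 540, 10_000

    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors

    # Pooled rate under the null hypothesis that both groups convert at the same rate.
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))

    z = (p2 - p1) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

    print(f"control rate = {p1:.3%}, variant rate = {p2:.3%}")
    print(f"z = {z:.2f}, p-value = {p_value:.3f}")

    # A small p-value only suggests the difference is unlikely to be noise.
    # It says nothing about causation, lagging effects, or other changes that
    # shipped in the same release -- those still need separate investigation.

A check like this is a floor, not a finish line: it tells you whether the signal is worth investigating further, not that your hypothesis is “proved.”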

I would love to hear your thoughts. I am hosting a workshop on this topic at DevOps West on Thursday, June 7, 2018. It would be great if you could join us there.


Natalie Warnert