Gathering customer insights, part five: measuring feedback
How good are you at getting feedback and doing something with it?
Measuring customer feedback
Welcome to another installment of this series of articles about gathering feedback and hearing the voices of your customers.
Part one explores the pitfalls of talking AT your customers. You visit customers, talk at them incessantly, never pause to listen, rarely ask questions, fear their feedback, and are blindsided when you get it.
Part two delves into the hubris of expertise and why we sometimes listen to and then choose to ignore our customers.
Part three introduces a product manager who falls victim to learning from customers but does not validate the market applicability of their requirements.
Part four explores what happens when you fall prey to negativity bias and outlines tactics to recognize when you are improving your product versus resolving complaints.
This article is about combining the right data and insights from customer feedback to help you answer your most important product questions.
Ask yourself:
How effective am I at gathering feedback?
Are the insights and data I am collecting resulting in action and outcomes?
The mechanics of gathering feedback
Gathering customer feedback should be a deliberate, structured activity designed ahead of time with specific outcomes in mind. Customer feedback is crucial data for understanding the health of your product and the effectiveness of your product organization.
It is essential to gather feedback through a variety of channels.
Product analytics
Product analytics tools are widely available, easy to implement, and provide excellent quantitative data about adoption and utilization. Given how simple it is to automate the collection of usage data, in-product analytics is a surprisingly underutilized tool. If you have this in your ‘we’ll get to this when we have a chance’ bucket, move it to the top of your priority list.
These analytics help you figure out where to make improvements in your product. The less obvious outcome is arming you with the confidence to do less and remove a lightly used feature. It is easy and tempting to pack a product with features, but that leads to overly complex software that is harder to use and even harder to maintain.
Begin each strategic planning cycle by asking, “How can we do less?” Use product analytics to prove less is possible.
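To make this concrete, here is a minimal Python sketch of the "prove less is possible" analysis. The event records, feature names, and the 5% threshold are all hypothetical; real analytics tools export richer data, but the shape of the calculation is the same:

```python
# Hypothetical usage events exported from a product analytics tool.
# Each event records which user touched which feature.
events = [
    {"user": "u1", "feature": "search"},
    {"user": "u2", "feature": "search"},
    {"user": "u1", "feature": "export"},
    {"user": "u3", "feature": "search"},
]

def adoption_by_feature(events, total_users):
    """Fraction of all users who touched each feature at least once."""
    users_per_feature = {}
    for event in events:
        users_per_feature.setdefault(event["feature"], set()).add(event["user"])
    return {f: len(users) / total_users for f, users in users_per_feature.items()}

def removal_candidates(adoption, threshold=0.05):
    """Features adopted by fewer than `threshold` of users: candidates to cut."""
    return sorted(f for f, rate in adoption.items() if rate < threshold)

adoption = adoption_by_feature(events, total_users=100)
candidates = removal_candidates(adoption)  # both features fall below 5% here
```

The threshold is a starting point for conversation, not a verdict; a feature used by 3% of users may still be critical to your largest customer.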
User interviews and design research
You’ll need the usage data to correlate with your customer interview feedback. To be most effective, design interviews with increasingly specific questions. “What are your current priorities?” “What are you most worried about?” “How do you accomplish this part of your job?” “What is blocking you?” “Here’s a feature we are thinking about implementing. Would this help?”
These example questions are designed for requirements validation and market applicability research. Include user design research–what’s the best way to implement a feature that will drive adoption and utilization? Does the feature need just an API or an API and UI?
Speculative experimentation
Speculative experimentation helps validate higher-level hypotheses or longer-term product strategy. What if we take our product in a new direction or add an adjacent feature set? What if we break the product apart, repackage, and refine the target personas for our ideal customer profile? What if we went after an entirely new market or a different ideal customer profile?
Designing conversations for customer feedback
Take an inventory of your last five customer meetings. What was the agenda for each? Were you there to inform or learn? Did you design the conversation in advance with an experiment in mind? How structured and consistent are the questions you asked of each?
Here are some guidelines I’ve used to gather feedback effectively.
Own the discussion
Sales, customer success, and implementation partners often ask product managers to meet with customers. By all means, take those conversations, but consider them tactical and not specifically part of your customer feedback process. Sure, you might learn something important, but consider these opportunities as a means to someone else’s end. There’s a different path to your strategic objective.
When gathering feedback, proactively schedule your customer meetings, set the topics and agenda, specify your participants, and run the session with a partner scribe. These meetings are for strategy-oriented discussions, not reacting to customer problems or supporting sales initiatives.
Be consistent
When you have an experiment in mind, you need to gather consistent feedback that you can compare and analyze. This means you need to ask the same questions, in the same order, to the same personas across these customer meetings.
Breadth comes from asking many customers the same questions; depth comes from coaxing details out of them.
Don’t sell
Customer feedback gathering isn’t about selling. You aren’t trying to convince the customer of anything at this point. Consider feedback gathering as a learning exercise. Structure the conversation to ask questions and then shut up and listen. Seek clarification wherever possible.
Spread the knowledge
Conduct weekly voice of the customer readouts where you share learnings across the product organization. Give team members the chance to ask questions about the methodology and the feedback it produced. This practice ensures you stay focused on communicating what you are trying to learn while formalizing the channel through which the entire product organization learns about customer needs.
And, since you set an OKR for a voice of the customer interview per week (you set this OKR, right?), you’ll always have something to report. These are the most effective meetings the team will have each week.
Bring together users on advisory boards
I am a big fan of advisory boards. After all those 1:1 conversations, I find it useful to bring groups of customers together at least once per year. The most important part of advisory boards is customers sitting together and talking (hopefully in front of you) and sharing information and experiences. Advisory boards provide an opportunity to conduct experiments and gather feedback in a concentrated manner.
Live comfortably in the fuzzy front end
Product managers live in the “fuzzy front end” of software development, an ambiguous realm of creativity, experimentation, and strategizing, a place that resists the easy measurements the DORA metrics apply to your software delivery process. [In this case, ‘easy’ serves as a synonym for ‘unambiguous,’ an important distinction so we don’t spread the misconception that measuring software development is, in fact, actually easy. It’s not.]
Feedback effectiveness
The good news is that we’ve established a way to measure the design phase of your product’s lifecycle, giving us a path to answering the first question: How effective am I at gathering feedback?
First, let’s decide what effective means.
In this phase, a product manager’s job is identifying and validating requirements, designing a solution, experimenting, and building a work-ready backlog. At this stage, you are probably thinking about go-to-market needs like pricing and packaging, routes to market, positioning, and partnering strategies. (Amazon famously packages all of this into its ‘Working Backwards’ methodology, and whether to use these techniques is a worthy path of investigation.)
A good measure of effectiveness is how quickly work-ready stories arrive in the backlog. (We discussed how to measure product design lead time in the series on metrics.) Once your voice of the customer cadence is in full swing, this should be less than a week, and as the program matures, less than a day.
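Assuming you record when feedback arrives and when the resulting story becomes work-ready, the lead-time calculation itself is simple arithmetic. This Python sketch uses hypothetical dates:

```python
from datetime import date

def design_lead_times(stories):
    """Days from first customer feedback to a work-ready backlog story."""
    return [(s["work_ready"] - s["feedback_received"]).days for s in stories]

# Hypothetical timestamps pulled from interview notes and the backlog tool.
stories = [
    {"feedback_received": date(2024, 3, 1), "work_ready": date(2024, 3, 6)},
    {"feedback_received": date(2024, 3, 4), "work_ready": date(2024, 3, 5)},
]
lead_times = design_lead_times(stories)  # [5, 1] -- trending under a week
```

Tracking the distribution over time, not just the average, shows whether the voice of the customer cadence is actually maturing.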
But, suppose you are trying to decide between a long-term refactoring project that will significantly improve a product's performance and stability, a new feature to help close a gap with a key competitor, or a foray into a new product area for an adjacent market. Deciding across projects like these is more significant than making and prioritizing a backlog.
Sequencing differs from prioritization: to decide which project to take on first, you need to understand each project’s breadth of applicability to your customers.
Data to support choosing the refactoring project exists in the frequency and distribution of requests made to customer success. What you need are insights that put those customer support numbers in context. Structure your interview questions to gather desire, impact, and applicability on a 1-5 scale, then combine that with your support data.
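One way to combine those two signals is a simple weighted composite. The weights, field names, and figures below are illustrative assumptions, not a standard formula:

```python
def project_signal(interview_scores, support_share, interview_weight=0.6):
    """
    Combine per-customer interview ratings (desire, impact, applicability,
    each on a 1-5 scale) with the share of support volume tied to the
    problem. Returns a 0-1 composite; the weights are illustrative.
    """
    avg = sum(sum(s.values()) / len(s) for s in interview_scores) / len(interview_scores)
    interview_component = (avg - 1) / 4  # normalize the 1-5 scale to 0-1
    return interview_weight * interview_component + (1 - interview_weight) * support_share

# Hypothetical data: two interviews, and 180 of 600 support requests
# relate to the performance and stability problems refactoring would fix.
scores = [
    {"desire": 4, "impact": 5, "applicability": 4},
    {"desire": 3, "impact": 4, "applicability": 5},
]
signal = project_signal(scores, support_share=180 / 600)
```

Compute the same composite for each candidate project and you have a comparable number to inform sequencing, alongside the qualitative detail from the interviews.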
For your new feature, gather market penetration and market share data. How far behind are you? What information do you need about the feature for it to drive new product adoption effectively?
A prototype, open-ended behavioral questions (e.g., here’s a page with a new feature; what would you do?), and a quantitative question to measure product impact (e.g., here are five product journeys; rank them by usability and importance) will give you clues to the impact of tackling this new feature first.
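Aggregating those journey rankings across customers can be as simple as a mean rank per journey. The journey names below are hypothetical:

```python
def mean_rank(rankings):
    """Average rank per journey across customers (lower = more important)."""
    totals = {}
    for ranking in rankings:  # one ordered list per customer
        for position, journey in enumerate(ranking, start=1):
            totals[journey] = totals.get(journey, 0) + position
    return {j: total / len(rankings) for j, total in totals.items()}

# Hypothetical journeys, ranked by three interviewed customers.
rankings = [
    ["onboarding", "search", "export"],
    ["onboarding", "export", "search"],
    ["search", "onboarding", "export"],
]
avg = mean_rank(rankings)
ranked = sorted(avg, key=avg.get)  # journeys from most to least important
```

With a consistent question order across interviews (see "Be consistent" above), these rankings are directly comparable between customers.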
Deciding to enter an adjacent market means understanding the market’s size and determining the product’s applicability. Does the product need significant new features, or can you attack the opportunity by making packaging changes or bundling products together?
Interview questions should start broad and narrow to a point that helps you understand basic requirements, ideal customer profiles, and personas. Use the answers to build a ‘likelihood to succeed’ number around your revenue forecasts and cost expectations.
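A crude sketch of such a number: multiply the revenue forecast by the likelihood your interviews suggest, and subtract the expected cost. The figures here are entirely illustrative:

```python
def expected_value(revenue_forecast, cost_estimate, likelihood):
    """
    Risk-adjusted value of the adjacent-market play. `likelihood` (0-1)
    comes from your narrowing interview questions, e.g. the fraction of
    prospects matching the ideal customer profile who confirmed the
    basic requirements.
    """
    return likelihood * revenue_forecast - cost_estimate

# Entirely illustrative figures.
ev = expected_value(revenue_forecast=2_000_000,
                    cost_estimate=600_000,
                    likelihood=0.35)
```

The point is not precision; it is forcing the interview program to produce an input your forecast actually depends on.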
Actions and outcomes
Our second question: Does the feedback we gather result in outcomes and action?
It may seem counterintuitive, but the main objective of feedback gathering is to build less product. A product with fewer features is simpler to use and easier to maintain. Every line of code that makes it to production is another line of code that could break, have a security vulnerability, degrade the performance and stability of your product, or significantly increase your delivery costs.
We often build a new feature because a customer asks for it or a competitor we are worried about already does it. Do you need that feature to deliver better value to customers or create unique differentiation against that competitor? Maybe. Or it could be that your competitor built that feature, and none of their customers actually use it. Your customer thinks she needs that feature–maybe because your competitor described it to her–but it doesn’t solve her primary problem. If you build it, she might never use it.
My budgets have always been tight, and I’ve always been short on people to execute a giant backlog. To the dismay of many colleagues, I’ve deleted two-thirds of an existing backlog on several occasions as my first step in formulating a product strategy. The rationale is crude but effective: if you haven’t delivered a seemingly critical feature in over six months, your customers don’t need it. Get rid of distractions and focus your team on what is important.
Or, you may be falling way behind on delivery, your utilization rates are dropping, and attrition rates are rising. In that case, focus the team on delivering efficiently and look for bottlenecks. Are customers rejecting features because of poor designs? Does it take product managers too long to get actionable feedback and build a work-ready backlog?
Use feedback gathering to understand how to improve your product’s time-to-value, to identify actions that improve the productivity and efficiency of the product organization, and to build a shared understanding of what delivering value to your customers looks like.
Some final thoughts
What you are trying to learn and the learning process is different for every product. Give the process sufficient time to unfold, and you’ll find the right balance between qualitative and quantitative feedback, know the optimal sample size to avoid doing twenty customer interviews when eight will do, and develop an understanding of what good looks like for your product lead time metric.
Good luck!



