A number of our clients have approached us recently to ask about using the Net Promoter Score (NPS) as the scoring methodology for their customer satisfaction surveys. Much of the hype around the NPS scoring method is attributable to Fred Reichheld’s book “The Ultimate Question,” in which Reichheld presents the Net Promoter Score as the single performance metric for understanding customers and predicting financial success.
(Please note that there is a difference between using the Net Promoter Score as one of the measures of customer feedback and using NPS as the only measure of customer feedback, as espoused in Reichheld’s book. I’ll abbreviate the latter usage as USM—ultimate score method.)
At Mindshare, we agree with Reichheld’s view that the single most important customer measurement is generally the strength of emotional commitment required to “recommend” a service or product to family or friends. Loyalty depends on both the head and the heart being involved with a company and its products and services. So using “likelihood to recommend” along with an aggressive scoring method like NPS or Top-Box as an overall measurement or an individual store’s goal is a worthwhile endeavor. Rallying a company’s culture around a single customer measurement is also extremely beneficial, as long as robust accountability is put in place.
We also recognize that the “best” measure of a customer’s loyalty is his actual behavior—did he buy more? It would be best to measure “share of wallet” or “retention” or “repurchase.” But these can be very difficult to acquire and measure. Our experience has been that the Recommend Question as a proxy for these is very predictive of future financial performance. But we also recognize that many customers will not recommend anything to anyone, regardless of how happy they are. Thus, we prefer to use a composite measurement, the average of: “likely to recommend,” “overall satisfaction,” and “likely to return,” if possible. Composite measurements are more stable and less likely to be influenced by temporary anomalies.
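As a concrete sketch of the composite measurement described above, the snippet below simply averages the three component scores. The numbers are invented for illustration, but the arithmetic shows why a composite is more stable: a swing in any single component moves the composite by only a third as much.

```python
# Hypothetical scores (on a 0-100 basis) for the three component
# questions; these numbers are invented for illustration.
likely_to_recommend = 62.0
overall_satisfaction = 71.0
likely_to_return = 68.0

# The composite measurement is a simple average of the three.
composite = (likely_to_recommend + overall_satisfaction + likely_to_return) / 3
print(f"Composite score: {composite:.1f}")  # → Composite score: 67.0

# A temporary anomaly in one component is damped: a 3-point drop in
# "likely to return" moves the composite by only 1 point.
dampened = (likely_to_recommend + overall_satisfaction + (likely_to_return - 3)) / 3
print(f"After anomaly: {dampened:.1f}")  # → After anomaly: 66.0
```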
We further believe that in many situations (particularly those in which individual unit scores bunch together without discernible differentiation) the use of more aggressive scoring methods such as Top-Box or NPS provides a way to quickly separate “the men from the boys,” differentiating true “evangelist customers” from the merely “satisfied.” It has been shown over and over that loyalty rises exponentially, not linearly, across increasing respondent scores (i.e., a “5” is an exponentially more loyal customer than a “4”). So about 50% of our clients use aggressive scoring methods like NPS or Top-Box to help spread out the performance distribution of their units.
However, to assume that a single question can be used to help a business diagnose and improve its operations is simply unfounded. Even Reichheld himself tries to temper his recommendation by suggesting potential additional questions that ask customers to explain “why” they feel the way they do. This is akin to telling a doctor you feel ill, having him ask you “why,” and then allowing no further questions. Common sense tells us that a single-question methodology cannot possibly be robust enough to provide the actionable information required to improve! Also, by throwing away data, we are unable to evaluate the “shape” of the distribution of answers. For example, how many “near-misses” are there? How many “very poor” experiences are there? To quote Reichheld himself, “To be actionable, customer feedback needs to relate specific problems to specific groups of customers.” (I ask, how can one know “specific problems” by asking just one question?)
Here are some additional issues that I think Reichheld and others miss when they try to over-simplify:
The allure of asking only a single question to determine customer satisfaction is undeniable. On the surface it sounds so inviting. It did to us. But when we tried to use Reichheld’s Ultimate Question in practice, we discovered its major utility flaw—a lack of actionable information that our clients can actually use to improve their business.
H.L. Mencken had it right when he said, “For every complex problem there is an answer that is clear, simple, and wrong.”
Reichheld’s primary antagonist in his book The Ultimate Question appears to be “The Traditional Customer Satisfaction Survey” implemented by a “Market Research Vendor.” He seems to assume that anyone involved in customer experience measurement uses horrendously long surveys, each tainted with a complex Market Research bent. The problem, as I see it, is that he hasn’t allowed for any alternative approaches. Thus, he fights against a straw man of his own creation. But there are companies that don’t fit his straw-man mold. One of them is Mindshare Technologies.
At Mindshare, we focus on Operations Improvement through Customer Involvement—tactical diagnosis of issues that are causing the customer experience to be less than stellar. We prefer short surveys. We use the “recommend” question along with others, and we utilize NPS as one of several aggressive scoring methodologies. We believe customer feedback should be taken as close to the experience as possible, with real-time results, and with as many respondents as possible. We strongly believe in the basic concepts of promoters and detractors. We believe promoters (or advocates) are significantly more profitable than neutrals or detractors. We believe detractors should be followed up with and local branches should be held accountable for service-lapse recovery. We believe strongly in the balanced scorecard that enthrones both customer and employee measurements as co-equals to short-term profit metrics.
So what is the “right” way to measure the customer experience? There is no single best method. However, here are our recommendations:
Note: Scoring methods do not affect the original data collected; they are just different ways of presenting the information. Generally, the more aggressive the method, the lower the score.
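To illustrate the note above, here is a minimal sketch (the ten responses are invented) that scores the same 0–10 “likelihood to recommend” data three ways: a simple mean rescaled to 0–100, Top-Box (the percentage scoring 9 or 10), and NPS (percent promoters, 9–10, minus percent detractors, 0–6, per the standard NPS convention). Each progressively more aggressive method yields a lower number from identical data.

```python
# Hypothetical "likelihood to recommend" responses on a 0-10 scale.
responses = [10, 9, 9, 8, 8, 7, 7, 6, 5, 3]
n = len(responses)

# Method 1: simple mean, rescaled to a 0-100 score.
mean_score = sum(responses) / n * 10

# Method 2: Top-Box -- the percentage giving the highest ratings (9 or 10).
top_box = sum(1 for r in responses if r >= 9) / n * 100

# Method 3: NPS -- percent promoters (9-10) minus percent detractors (0-6).
promoters = sum(1 for r in responses if r >= 9) / n * 100
detractors = sum(1 for r in responses if r <= 6) / n * 100
nps = promoters - detractors

print(f"Mean: {mean_score:.0f}, Top-Box: {top_box:.0f}%, NPS: {nps:+.0f}")
# → Mean: 72, Top-Box: 30%, NPS: +0
```

The same ten responses score 72 under a mean, 30 under Top-Box, and 0 under NPS: switching scoring methods changes the reported number without changing the underlying data.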
If you’ll indulge me, let me be perfectly frank: Using the ultimate score methodology would probably bring Mindshare more revenue and more profit, and create fewer headaches! Why? Because the surveys are so much shorter, the response from customers is often greater, the cost of the surveys is lower, and there are fewer “moving parts”—hence, fewer problems for Mindshare. It is absolutely in our best short-term interest as a company to promote the ultimate score methodology as the only method for measuring customer feedback. But we don’t—because it won’t hold up.
At Mindshare we evaluate and report on over 70,000 surveys per day for clients in over 25 service industries. Our objective is to provide our clients with real-time, actionable information that will help them to not simply measure customer satisfaction, but to have clear guidance on how to diagnose and improve customer satisfaction. Put simply, we strive to deliver—
We believe that theory is useful, but not always sustainable, in actual application. As support for our position, we present in the appendix excerpts from a sample of academic and consulting sources. The following authors are primarily engaged in doing surveys, rather than writing about them. None of these authors has any tie—official or unofficial—to Mindshare.
This is how I see it.
by John H. Fleming, co-author of “Manage Your Human Sigma” (HBR, July 2005)
Reichheld’s claim has provoked a great deal of interest and debate, and the utility of the entire construct has recently been called into question (Thurm, 2006). Nonetheless, it has struck a chord, primarily because it takes something that’s been considered complex and makes it astonishingly simple.
The existence of a simple, single-item performance metric that can be reliably linked to positive financial performance would be the management equivalent of a cure for the common cold.
However, as noted previously, single-item measures are inherently less reliable, and some advocates are more valuable to your company than others. And there are other reasons to be wary of a single-item approach. Among the most important reasons is that a single-item advocacy metric doesn’t tell you why customers recommend a company. As a result, it doesn’t give you the intelligence you need to manage customer touch points to increase the number of these advocates. Measuring advocacy is one thing, but to manage your customer relationships effectively, you need to know more.
Knowing your company’s NPS may prove a useful and important piece of business intelligence, but it doesn’t tell the whole story. And missing out on the rest of the story can prevent your business from harnessing the true power of customer relationships that drive financial performance.
Taken from: businessjournal.gallup.com
by Fred Van Bennekom, Dr.B.A.
The December 2003 Harvard Business Review article, “The One Number You Need to Grow,” by Frederick Reichheld is one of those articles with “legs.” A title like that should make anyone skeptical, and with no disrespect to Mr. Reichheld, the title of his article, while snazzy, doesn’t do justice to the content of his research and may lead readers to the wrong conclusion. The article has been misinterpreted as “The One Number You Need to Know.” In fact, knowledge of more than one number is needed to grow a business. A robust customer feedback program is needed.
From personal experience, I can state definitively that “willingness to recommend” as a sole survey question has a hole the size of a Mack truck. Many people who are thrilled are not willing to make a recommendation or serve as a reference.
Why? Because their companies won’t allow it. Also, serving as a reference is work for the referrer, and the bond has to be incredibly strong for the customer to take on that burden.
To the contrary, knowledge of a customer’s willingness to recommend—alone—is not actionable survey data.
What if I get low scores from a number of people? What would I do? I have no idea! Why? Because the one-question survey instrument design provides no information on what action to take.
To grow a business, you need to engage a customer feedback program that will predict at a macro level the course of your business. At a micro level, the feedback program must isolate the causes of customer dissatisfaction—and satisfaction. This information is vital to recovering at-risk customers and to performing root-cause identification and resolution. It’s the improved business design and operational execution that leads to business growth.
Taken from: greatbrook.com
by Doug Grisaffe, assistant professor of marketing, University of Texas, Arlington
Actually, there are several critical logical, conceptual and statistical problems with Reichheld’s proposition.
Perhaps Reichheld’s motivation was good, but his methodology was flawed. Perhaps he chose recommendation through a classic statistical mistake—interpreting correlation as if it implies causation. In a statistical search through survey items to see which correlated most with certain business measures, he concluded recommendation usually was best. . . .
A strong correlation existed between net-promoter figures and a company’s average growth rate. . . . Remarkably, this one simple statistic seemed to explain the relative growth rates. . . . p. 51.
Clearly, he is inferring causation, advocating management of the net promoter score as a means to realize enhanced business performance.
Those trained in scientific methods will instantly recognize the significant difference between the following two “path models”:
Recommendation is an effect. It is fine to monitor it as a sign of business health. However, to control upward movement, we’ve got to manage the underlying cause.
It may be motivating and clear-cut, but it is far from actionable. What specifically should a company improve, among all possible things it could improve, to move the score upward? Good luck to the manager left to speculate about what specific actions need to be taken.
Taken from: walkerinfo.com
All excerpts are from freely available websites listed at the end of each excerpt. Please visit the websites to see the complete articles. (No fees, registrations, or log-ins were required.)
The reader may also want to perform an Internet search for additional information, as the articles referenced in the appendix are just a few of many opinions on both sides of the issue.