The One Number You Need to Know (Actually, There’s More Than One)

The December 2003 Harvard Business Review article, “The One Number You Need to Grow,” by Frederick Reichheld is one of those articles with “legs.” (The article’s title is sometimes abbreviated to “The One Number to Grow.” A more in-depth treatment is found in his book, The Ultimate Question.) More than a decade after its publication, colleagues still ask me about it, professional associations refer to its “net promoter score” (NPS), and students cite it in their papers.

A title like that should make anyone skeptical, and with no disrespect to Mr. Reichheld, the title of his article, while snazzy, doesn’t do justice to the content of his research and may lead readers to the wrong conclusion. The article has been misinterpreted as “The One Number You Need to Know.” (A colleague of mine actually made that mistake unintentionally in a blog post of his, since corrected.) In fact, growing a business requires knowledge of more than one number; it requires a robust customer feedback program.

The article opens with Reichheld hearing the CEO of Enterprise Rent-A-Car, Andy Taylor, talk about his company’s “way to measure and manage customer loyalty without the complexity of traditional customer surveys.” Enterprise uses a two-question survey instrument; the two questions are:

  1. What was the quality of their rental experience, and
  2. Would they rent again from Enterprise.

This approach was simple and quick, and we can infer from other comments in the article that the survey process had a high response rate, though no figure is stated. Enterprise also ranked (sic) its branch offices solely by the percentage of customers who gave their experience the highest rating option. (Again, we don’t know the number of response options on the interval rating scale. I’ll guess it was a 1-to-5 scale and not a 1-to-10 scale.) Why this approach?  Pushing branches to satisfy customers to the point where they would give top ratings was a “key driver of profitable growth,” since those customers had a high likelihood of repeat business and of making recommendations.
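To make the arithmetic of that ranking concrete, here is a minimal sketch of a top-box calculation; the branch names, ratings, and the assumed 1-to-5 scale are all invented for illustration, since the article doesn’t specify them.

```python
from collections import defaultdict

# Hypothetical survey records: (branch, experience rating) on an assumed 1-to-5 scale.
responses = [
    ("Midtown", 5), ("Midtown", 4), ("Midtown", 5),
    ("Airport", 3), ("Airport", 5), ("Airport", 2),
]

TOP_BOX = 5  # the highest available rating option

tallies = defaultdict(lambda: [0, 0])  # branch -> [top-box count, total responses]
for branch, rating in responses:
    tallies[branch][1] += 1
    if rating == TOP_BOX:
        tallies[branch][0] += 1

# Rank branches solely by the share of customers who gave the top rating.
ranking = sorted(
    ((top / total, branch) for branch, (top, total) in tallies.items()), reverse=True
)
for share, branch in ranking:
    print(f"{branch}: {share:.0%} top-box")
```

Note that a 4 counts no more than a 1 in this ranking; only top-box responses matter, which is exactly the point of Enterprise’s approach.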

Reichheld, thus intrigued, pursued a research agenda to see if this experience could be generalized across industries. His study found “that a single survey question can, in fact, serve as a useful predictor of growth.” The question: “willingness to recommend a product or service to someone else.” The scores on this question “correlated directly with differences in growth rates among competitors.” (my emphasis) This “evangelic customer loyalty is clearly one of the most important drivers of growth.”

From personal experience, I can state definitively that “willingness to recommend” as a sole survey question has a hole the size of a Mack truck. At the end of my Survey Design Workshops, not surprisingly, I survey my attendees. (I try not to imitate the story of the cobbler and his barefoot children.) Many people who are thrilled with the survey training class are not willing to make a recommendation or serve as a reference. Why? Because their companies won’t allow it. Also, serving as a reference is work for the referrer, and the bond has to be incredibly strong for the customer to take on that burden. When I discuss attitudinal questions that summarize respondents’ feelings, such as referenceability questions, in my survey training classes, I ask about this phenomenon. It’s quite common in a business-to-business environment, though it’s much less common in a consumer product environment.

Thus, the survey question written for willingness to recommend must be phrased correctly, that is, in a hypothetical sense, not as a request for some action. For example:

If a colleague or friend should ask you for a recommendation on a <insert product or service>, how likely would you be to recommend us?

However, that’s not the question that Reichheld used in his study.  His question was:

“How likely is it that you would recommend [company X] to a friend or colleague?”

Reichheld noted late in the article that the recommendation question did not work well in certain industries, and the reasons discussed here are probably why. But these issues are likely present in all industries to some extent.

Reichheld then discusses customer retention rates and customer satisfaction scores as adequate predictors of profitability, but not of growth. He correctly notes that many customers are retained by a company because they’re captive to the high costs of switching to another product. Thus, a likelihood-of-repurchase survey question may mask underlying operational problems, since dissatisfied folks might still be retained — but they certainly wouldn’t recommend. However, I’ll guess an unhappy but captive, retained customer also has a low likelihood of completing any survey invitation. More importantly, if you’re not retaining customers, it’s awfully tough to grow! So, measuring customer retention — and fixing identified core problems — is one element in a growth strategy.

He cites one of the Big Three car manufacturers that could not understand why its customer satisfaction scores didn’t correlate to profits or growth. The reason is that these surveys are overtly manipulated by the car dealers and especially their salespeople. Remember the last time you bought a new car? The salesperson probably handed you a photocopy of the J.D. Power survey you’d be getting (with all the high scores checked off) and explained that high scores would lead to an extra bonus payment for him — and those kids’ braces are expensive. New car surveys are perhaps the most egregious example of poorly conducted surveys. Thus, it’s very tenuous to draw conclusions about the “most sophisticated satisfaction measurement systems” from that most unsophisticated example. In this regard, Reichheld is guilty of the same error as the car manufacturer who drew conclusions from poorly collected data.

With all this evidence, Reichheld advocates a “new approach to customer surveys.” A one-question survey “can actually put customer survey results to use and focus employees on the task of stimulating growth.” (my emphasis) His main conclusion is that a simple survey focused on willingness to recommend — or perhaps some other single measure in certain industries — is better than a more involved survey. “The goal is clear-cut, actionable, and motivating.” Not so fast!

This is where I part company with Mr. Reichheld.  To the contrary, knowledge of a customer’s willingness to recommend — alone — is not actionable survey data.

Notice some key terms cited earlier in the Reichheld study: “predictor of growth” and “correlated directly”.  A customer’s testimony about their willingness to recommend is not a cause of growth; rather, it’s a predictor since it’s closely correlated to growth, according to the study. (See below for more details on the exact study Reichheld performed.) Both revenue growth rates and the customer’s willingness to recommend are caused by customers’ experiences with the company’s products or services — positive or negative. That is, they both spring from a common source, as shown in the diagram below.

[Diagram: customers’ experiences with the company’s products or services drive both revenue growth and willingness to recommend]

For data to be actionable, we have to learn where to take corrective action when goals are not achieved. Knowing a customer is not willing to recommend us does not tell us what root causes need to be addressed. (See Dr. Fred’s article on generating actionable data.) The relationship is not as depicted below. We cannot act on the willingness to recommend directly — except by manipulating a survey and generating questionable data as in the car dealer example.

[Diagram: willingness to recommend shown, incorrectly, as a direct cause of revenue growth]

To make this relationship clear, let me turn back to my experiences with my survey training classes. Let’s say that I ask only that recommendation question on my post-workshop survey, phrased correctly. What if I got low scores from a number of people? What would I do? I have no idea! Why? Because the one-question survey instrument design provides no information on what action to take. Instead, I ask some very specific, very actionable survey questions about attributes of the survey workshop, e.g., the value of the content of various sections, the value of the exercises, the quality of instruction, and the quality of the venue. I also ask people to provide specific details to support their scoring, especially for the weak scores. Combined with follow-up discussions, these data have helped me greatly refine the workshop materials.

[Image: “A Net-Promoter Primer” sidebar]

Let me be fair to Reichheld. At the end of the article he drops some critical pearls of wisdom about Enterprise’s survey system. It’s a phone survey, and information from unhappy customers is forwarded to the responsible branch manager, who then engages in service recovery actions with the customer, followed by root cause identification and resolution.

More importantly, in “A Net-Promoter Primer”, some critical information is presented. (See nearby image.) In addition to the willingness to recommend question that will serve to categorize the respondent, presumably at the start of the survey process, the survey contains “Follow-up questions [that] can help unearth the reasons for customers’ feelings and point to profitable remedies.” These questions should be “tailored to the three categories of customers”, meaning the survey should branch after the categorization question. This critical, practical information is presented in parentheses — yet there is nothing parenthetical about it!
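As a purely illustrative sketch of that branching (the article prescribes no implementation, and the follow-up wording below is invented), the flow might look something like this, using the score bands described in the study details at the end of this piece:

```python
def categorize(score: int) -> str:
    """Map a recommendation score to the three categories in the primer."""
    if score >= 9:
        return "promoter"             # 9 or 10
    if score >= 7:
        return "passively satisfied"  # 7 or 8
    return "detractor"                # 6 or below

# Hypothetical follow-up questions tailored to each category; the article says only
# that follow-ups should be tailored, not what they should ask.
FOLLOW_UPS = {
    "promoter": ["What do you value most about doing business with us?"],
    "passively satisfied": ["What would it take for you to rate us a 9 or 10?"],
    "detractor": ["What went wrong, and what should we fix first?"],
}

def next_questions(score: int) -> list[str]:
    """Branch the survey after the categorization question."""
    return FOLLOW_UPS[categorize(score)]

print(next_questions(6))  # prints the detractor follow-up
```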

To grow a business, you need a customer feedback program that will predict, at a macro level, the course of your business. At a micro level, the feedback program must isolate the causes of customer dissatisfaction — and satisfaction. This information is vital to recovering at-risk customers and to performing root cause identification and resolution. It’s the improved business design and operational execution that leads to business growth.

Even Mr. Reichheld agrees: there IS more than The One Number You Need to Grow.

All quotations from “The One Number You Need to Grow,” Frederick Reichheld, Harvard Business Review, December 2003.

Reichheld’s Study Details

Here are more complete details of the study Reichheld and his colleagues at Satmetrix performed, according to the article. Some details are sketchy.

They administered Reichheld’s “Loyalty Acid Test” survey to thousands of people from public lists; they “recruited” 4,000 from these lists to participate.

They got these people to provide a purchase history, and asked when they had made a referral to a friend or colleague. If they didn’t have any referral information, the researchers waited 6-12 months and then asked these questions.

They built 14 “case studies” where sufficient data allowed statistical analysis and found which survey questions best correlated with repeat purchases or with referrals.

The willingness to recommend question was the best or second-best question in 11 of 14 case studies. Reichheld conjectures that the more tangible question of making a recommendation resonated better with respondents than the more abstract questions about a company deserving a customer’s loyalty.

The exact sequence of the project is a bit hazy here. They then developed a response scale to use with the recommendation question, choosing a 1-to-10 scale ranging from “extremely likely” to “not at all likely.” It appears they performed cluster analysis on the data (though no statistics are mentioned) and found three clusters: “Promoters” would give scores of 9 or 10, the “Passively satisfied” would score a 7 or 8, and “Detractors” would score 6 or below.
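Using those bands, tabulating a batch of responses is straightforward. The scores below are invented; the final line computes the net figure (share of promoters minus share of detractors) that is commonly reported as the NPS.

```python
# Hypothetical scores on the 1-to-10 recommendation scale described above.
scores = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]

promoters = sum(1 for s in scores if s >= 9)      # 9 or 10
passives = sum(1 for s in scores if 7 <= s <= 8)  # 7 or 8
detractors = sum(1 for s in scores if s <= 6)     # 6 or below
total = len(scores)

print(f"Promoters:           {promoters / total:.0%}")
print(f"Passively satisfied: {passives / total:.0%}")
print(f"Detractors:          {detractors / total:.0%}")

# The figure commonly reported as NPS: share of promoters minus share of detractors.
print(f"Net score:           {(promoters - detractors) / total:+.0%}")
```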

The next step was to see how well these groups would predict industry growth rates. Satmetrix administered the Recommendation Survey to thousands of people from public lists and correlated the results to companies’ revenue growth rates. Conclusion: no company “has found a way to increase growth without improving its ratio of promoters to detractors.” Again, you improve the ratio by improving the underlying product or service — and you need to know what to improve.
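The article reports only that these measures “correlated directly” with growth, with no method or coefficient given, but as a rough illustration of that last step, here is a sketch of a simple Pearson correlation between a per-company promoter-to-detractor ratio and revenue growth; all of the figures are invented.

```python
from statistics import correlation  # Pearson correlation coefficient, Python 3.10+

# Invented per-company figures: ratio of promoters to detractors, and annual
# revenue growth in percent. The article publishes no such data.
promoter_detractor_ratio = [0.8, 1.2, 1.5, 2.0, 2.6, 3.1]
revenue_growth_pct = [1.0, 2.5, 3.0, 5.5, 7.0, 9.0]

r = correlation(promoter_detractor_ratio, revenue_growth_pct)
print(f"Pearson r = {r:.2f}")  # near 1.0 for these made-up numbers
```

A strong correlation here would echo the study’s conclusion, but, as argued above, it still says nothing about what to fix; that requires the follow-up questions.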