Get rid of (or at least modify) those Net Promoter and Satisfaction scores
Stop the presses! Jiffy Lube has, according to Ad Age, made a startling discovery. Jiffy Lube’s Net Promoter Score (NPS), the metric touted as “the one number you need to grow,” was not actually helping it grow. This is how Ad Age explained NPS – “Customers are asked: ‘How likely is it that you would recommend (company name) to a friend or colleague?’ They rate the company on a scale of 0 to 10, and they are then categorized by loyalty. Those who rate 9 or 10 are ‘promoters’; those who give 7 or 8 are ‘passives,’ and those who rate the company from 0 to 6 are ‘detractors.’ NPS is then calculated by subtracting the percentage of detractors from the percentage of promoters. NPS can range from -100 (all detractors) to 100 (all promoters); and companies with ‘the most efficient growth engines’ rate in the 50 to 80 range.”

Jiffy Lube evidently had excellent NPS scores (exceptionally positive in some markets), but they were not indicators of the desired customer behavior – recommending the service. Amy Raihill, Jiffy Lube’s Insights Manager, told Ad Age: “At a system level, NPS was simply not a predictor.”

Who’d have guessed? I’ll tell you who – anyone who has ever worked with survey data. NPS, just like its sibling metric Satisfaction (Sat), is what’s known in the trade as a “derivative” metric. If you want insights that allow you to drive growth, you need to ask questions that allow you to understand causality. Start focusing on causality and you will find metrics that provide a much more direct route to improvement.

On a website, visit success (based on visit intent) is the most valuable question. NPS and Satisfaction scores are all but irrelevant, except as indicators of trend; and even then they can be misleading. For years, JCPenney tracked satisfaction on its website. The Sat scores were consistently high, but conversion rates never showed much improvement.
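For readers who want to see the arithmetic, the NPS calculation Ad Age describes can be sketched in a few lines of Python (a minimal illustration; the function name and sample scores are my own):

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count only in the total.
    Result ranges from -100 (all detractors) to 100 (all promoters).
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
# (40% promoters) - (30% detractors) = NPS of 10
print(nps([10, 9, 9, 10, 7, 8, 7, 3, 5, 6]))  # -> 10.0
```

Note that the score compresses the distribution: very different mixes of responses can produce the same single number, which is part of why it carries so little diagnostic information on its own.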
Only when JCPenney started asking its visitors how successful they had been did it start collecting feedback that pointed to site problems and opened up a path to continuous improvement.

Metrics-heavy surveys (which you see from any vendor offering benchmarking data – Foresee, iPerceptions, Satmetrix, etc.) make continuous improvement much more difficult than it needs to be because they typically don’t link their metrics questions to conditional, open-ended questions. Asking someone for a score, no matter what you are trying to measure, is all but pointless unless you also find out why they gave you that score. Metrics have meaning only when they are imbued with context. Only an understanding of causality makes metrics data actionable.

And that’s what Jiffy Lube discovered. They took their NPS scores and had an analytics vendor overlay the data against the text-based feedback they collected from their store visitors. You’ll never guess what they found: lower NPS scores correlated with comments describing (primarily) negative experiences. Can you imagine? All that analytical effort, when a simple set of conditional questions would have gotten them to better insights much faster and cheaper.

The NPS question reads thus: “How likely is it that you would recommend (company name) to a friend or colleague?” The underlying assumption is that people who “would” recommend actually do. No. People don’t actually behave that way – hence the need for conditional follow-on questions to provide context and causality.

Try this approach instead. The simplest route to actionable insight on a website is to ask a question about outcome, not emotion: “Based on why you came here today, how successful was your visit?” If they score, say, 5 or lower on a 7-point scale, ask: “Please help us understand why you rated your visit less than successful.” (You can make it easy by providing a list of categorical options, but you must always ask for specifics, too.)
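In survey-tool terms, the visit-success approach above is just one piece of conditional branching logic. A minimal sketch (the function name, scale bounds, and threshold parameter are my own assumptions; the prompt text is from the article):

```python
def visit_success_followup(rating, scale_max=7, threshold=5):
    """Return the open-ended follow-up for a visit-success rating, or None.

    Ratings at or below the threshold (here 5 on a 7-point scale) trigger
    the conditional 'why' question; higher ratings need no follow-up.
    """
    if not 1 <= rating <= scale_max:
        raise ValueError(f"rating must be between 1 and {scale_max}")
    if rating <= threshold:
        return ("Please help us understand why you rated "
                "your visit less than successful.")
    return None
```

The point of the sketch is that the branch condition lives next to the metric question: every low score is immediately paired with its own explanation, which is what makes the aggregate data actionable.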
If you insist on keeping those treasured NPS scores, however, try this kind of flow. Start with: “How likely is it that you would recommend (company name) to a friend or colleague?” If they answer 0 through 8 (respondents NPS classifies as Detractors or Passives), the follow-up is: “Please help us understand why you would not recommend us to a friend or colleague.” If they answer 9 or 10 (respondents NPS classifies as Promoters), then ask: “How likely is it that you will recommend (company name) to a friend or colleague?” If they rate that question anywhere from 0 through 8, then: “Please help us understand why you feel so positive about us but are unlikely to recommend us?” If they rate the “likelihood to recommend” question 9 or 10, ask: “What specifically about your experience with us drives the likelihood that you will recommend us?”

The follow-on questions provide the context and causality that make the metrics understandable and, in aggregate, actionable. So don’t make it as hard for yourself as Jiffy Lube did. Cut back on the metrics questions, especially the derivative ones, and add open-text follow-on questions. Growth will come far more easily.
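For survey builders, the branched NPS flow above reduces to a small decision function. This is an illustrative sketch only (the function name and parameters are my own; the prompts and score bands are from the flow described in this article):

```python
def nps_followup(would_recommend, will_recommend=None):
    """Return the next prompt in the branched NPS follow-up flow.

    would_recommend: 0-10 answer to 'how likely is it that you WOULD recommend...'
    will_recommend:  0-10 answer to the behavioral 'how likely is it that you
                     WILL recommend...' question, asked only of 9-10 scorers.
    """
    if would_recommend <= 8:
        # Detractors (0-6) and Passives (7-8): ask directly for the cause.
        return ("Please help us understand why you would not recommend us "
                "to a friend or colleague.")
    # Promoters (9-10): probe whether stated intent matches expected behavior.
    if will_recommend is None:
        return ("How likely is it that you will recommend (company name) "
                "to a friend or colleague?")
    if will_recommend <= 8:
        return ("Please help us understand why you feel so positive about us "
                "but are unlikely to recommend us?")
    return ("What specifically about your experience with us drives the "
            "likelihood that you will recommend us?")
```

Whichever tool you use, the design principle is the same: no metric question stands alone – every score branches into an open-ended question that supplies its cause.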
-Roger Beynon, CSO, Usability Sciences Corporation