The Usability Blog
A Practical Guide to User Experience Insights

What Do You Measure?

Three key questions: What do you measure? Why do you measure it? How do you measure it? Consider the numbers below. They come from six e-commerce sites and represent metrics collected from each site's most important visitor segment: those who arrive intending to buy. Of potential purchasers who report a positive change in overall site satisfaction, 83% are likely to recommend the site (scores of 9 or 10 on the 0-10 Net Promoter scale). Of those who report a negative change in site satisfaction, only 11% will do so.

[Figure 1: Likelihood to recommend, by change in overall site satisfaction]

If, as the Net Promoter methodology maintains, likelihood-to-recommend scores are the best indicator of business health, then site visits that end in a negative change in overall satisfaction are highly detrimental to the health of the brand and of the channel itself. If your site pursues a program of continuous improvement, metrics that measure both overall site satisfaction and visit success tell you where to focus your improvement efforts.

Success measures the outcome of the visit. Satisfaction measures the visitor's emotional reaction to the experience: the emotion that follows an outcome. Satisfaction scores indicate how personally the visitor has taken the experience.

Note how much more extreme the reaction is when visitors experience a negative change in overall site satisfaction than when they simply fail in their attempt to buy. 81% of potential buyers who report a successful visit say they are likely to recommend the site, a reaction similar to that of the positive-satisfaction-change population. But the response of failed visitors is far less punitive than that of visitors whose satisfaction declined.

[Figure 2: Likelihood to recommend, by visit success versus failure]
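To make the segmentation behind both figures concrete, here is a minimal Python sketch of how such promoter shares could be computed. The VisitResponse record and its field names are illustrative assumptions for this post, not an actual survey schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class VisitResponse:
    """One survey response from a visitor who arrived intending to buy.
    Field names are assumptions for illustration."""
    sat_change: int   # exit satisfaction minus entry satisfaction
    success: bool     # did the visit achieve its purpose (a purchase)?
    recommend: int    # likelihood to recommend, on the 0-10 Net Promoter scale

def promoter_share(responses: List[VisitResponse]) -> float:
    """Fraction of respondents scoring 9 or 10 (promoters on the NPS scale)."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r.recommend >= 9) / len(responses)

def segment_report(responses: List[VisitResponse]) -> Dict[str, float]:
    """Promoter share for the four segments compared in the figures above."""
    return {
        "positive sat change": promoter_share([r for r in responses if r.sat_change > 0]),
        "negative sat change": promoter_share([r for r in responses if r.sat_change < 0]),
        "successful visit":    promoter_share([r for r in responses if r.success]),
        "failed visit":        promoter_share([r for r in responses if not r.success]),
    }
```

Run over the data described above, the first two entries of such a report would read roughly 0.83 and 0.11, with satisfaction change separating the segments far more sharply than success alone.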

A negative change in satisfaction produces a virulent reaction. It undermines not only conversion but brand equity and future financial performance, so it is essential that a site be able to identify when it happens and to whom.

To detect a change, the site must measure satisfaction at two points in time. When Usability Sciences entered the online survey business in 2000, our first customer was P&G, and they schooled us in how to obtain the most valuable visit data. Best practice, according to the world's most sophisticated market research user, is to ask a metrics question on entering the site and again on leaving. Any delta between those two values provides a clear and invariably productive analytical avenue into the data. Taking a satisfaction score only at the end of a visit provides no indication of change, and change, especially negative change, is the data point of greatest analytical value, IF the analyst can mine the visit data to determine causality: the why.

This is where survey technology must complement survey methodology. A survey technology that captures as much information as possible about causality (open-ended questions) and context (behavioral data capturing navigation, content accessed, tool use, and decision-making) will yield the greatest insights from experiences that change satisfaction. Analysis is triage: there are only so many hypotheses an analyst can investigate in the time available. So the analyst must know where to look and then be able to ascertain causality as efficiently and as confidently as possible. The greater the supporting contextual information, the more confident analysts can be in their conclusions.

If, therefore, a site takes the time and effort to collect visitor survey data, it should maximize the return on that investment by making sure its methodology and its technology provide visibility into the root causes of negative change. Only when the site understands causality at the root can it take action that eliminates problems permanently.
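As a rough sketch of the two-point methodology and the triage it enables: the entry and exit scores, the delta, and the priority ordering below are assumptions for illustration, not a prescribed implementation.

```python
def sat_delta(entry_sat: int, exit_sat: int) -> int:
    """Change in overall satisfaction across the visit: exit minus entry.
    A single end-of-visit score could never produce this value."""
    return exit_sat - entry_sat

def triage_order(visits: list) -> list:
    """Order visits for analysis. Negative satisfaction change comes first,
    since it is the data point of greatest analytical value; failed visits
    without a satisfaction drop come next; everything else follows.
    Each visit is assumed to be a dict with 'entry_sat', 'exit_sat', and
    'success' keys, alongside the open-ended answers and behavioral context
    the analyst mines for root causes."""
    def priority(visit: dict) -> int:
        if sat_delta(visit["entry_sat"], visit["exit_sat"]) < 0:
            return 0  # negative change: investigate first
        if not visit["success"]:
            return 1  # failed, but no satisfaction drop
        return 2      # successful and stable or improved
    return sorted(visits, key=priority)
```

The ordering simply encodes the triage argument above: with limited analyst time, visits showing a negative delta are the ones most worth the effort of establishing causality.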

–Roger Beynon, CSO, Usability Sciences

 

