AICEX: To measure experience adequately, you need to rely on more than one metric.
There’s been a recent uptick in people asking me about Customer Effort Score (CES), so I thought I’d share my thoughts in this post.
As I’ve written in the past, no metric is the ultimate question (not even Net Promoter Score). So CES isn’t a panacea. Even the Temkin Experience Ratings isn’t the answer to your customer experience (CX) prayers.
The choice of a metric isn’t the cornerstone of great CX. Instead, how companies use this type of information is what separates CX leaders from their underperforming peers. In our report, the State of CX Metrics, we identify four characteristics that make CX metrics efforts successful: Consistent, Impactful, Integrated, and Continuous. When we used these elements to evaluate 200 large companies, only 12% had strong CX metrics programs.
“Should we use CES, and how does it relate to NPS?” I hear this type of question all the time. Let me start my answer by examining the four things that CX metrics measure: interactions, perceptions, attitudes, and behaviors.
CES is a perception measure while NPS is an attitudinal measure. In general, perception measurements are better for evaluating individual interactions. So CES might be better suited for a transactional survey while NPS may be better suited for a relationship survey. You can read a lot that I’ve written about NPS on our NPS resource page.
Now, on to CES. I like the concept, but not the execution. As part of our Temkin Experience Ratings, we examine all three aspects of experience—functional, accessible, and emotional. The accessible element examines how easy a company is to work with. I highly encourage companies to dedicate significant resources to becoming easier to work with and removing obstacles that make customers struggle.
But CES uses an oddly worded question: How much effort did you personally have to put forth to handle your request? (Note: In newer versions of the methodology, they have improved the language and scaling of the question). This version of the question goes against a couple of my criteria for good survey design:
- It doesn’t sound human. Can you imagine a real person asking that question? One key to good survey design is that questions should sound natural.
- It can be interpreted in multiple ways. If a customer tries to do something online, but can’t, did they put forth a lot of effort? How much effort does it take to move a mouse and push some keys?!? Another key to good survey design is to have questions that can only be interpreted in one way.
If you like the notion of CES (measuring how easy or hard something is to do), then I suggest that you ask a more straightforward question. How about: How easy did you find it to <FILL IN THING>? And let customers pick a response on a scale between “very easy” and “very difficult.”
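If you adopt a question like this, the scoring rule matters as much as the wording. Here is a minimal sketch of one way to roll up the responses, assuming a 5-point scale and a top-2-box score; both the scale labels and the scoring rule are my assumptions, not part of CES or the article above:

```python
# Map the assumed 5-point response labels to numeric values.
SCALE = {
    "very easy": 5,
    "easy": 4,
    "neutral": 3,
    "difficult": 2,
    "very difficult": 1,
}

def ease_score(responses):
    """Return the share of respondents answering 'easy' or 'very easy' (top-2-box)."""
    if not responses:
        raise ValueError("no responses to score")
    top2 = sum(1 for r in responses if SCALE[r.lower()] >= 4)
    return top2 / len(responses)

# Example: 3 of 5 respondents in the top two boxes -> 0.6
responses = ["very easy", "easy", "neutral", "very difficult", "easy"]
print(round(ease_score(responses), 2))
```

A top-2-box share is just one reasonable rollup; a simple mean of the numeric values, or “% easy minus % difficult,” would work on the same data, and the right choice depends on how you plan to trend and compare the results.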
My last thought is not about CES, but more about where the world of metrics is heading. In the future, organizations will collect data from interactions and correlate them with future behaviors (like loyalty), using predictive analytics to bypass all of these intermediary metrics. Don’t throw away all of your metrics today, but consider this direction in your long-term plans.
The bottom line: There is no such thing as a perfect metric.
AICEX Customer Experience Italian Association