External Validation: Part Five in the Continuing Series, Getting Quality Right

By Cliff Hurst

Over the past two years I have devoted a great deal of my professional attention and academic studies to developing a new model of quality management in call centers called “Getting Quality Right.” This model is based on the realization that there are four vital questions that must be answered in order to get quality right. In this article, we will wrap up our discussion on the first question: How are we, as an organization, doing at representing our company to its customers?

There are four elements to addressing this question:

  • You must monitor a random sampling of calls.
  • You must monitor a sufficiently large number of calls to achieve the degree of precision and accuracy that you desire.
  • The distribution of scores from your sample must approximate a normal distribution.
  • Your monitoring forms must be both reliable and valid.
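The first two elements above can be sketched in a few lines of code. This is only an illustration: the function names, the 95% confidence level, and the worst-case proportion of 0.5 are my own assumptions, not figures from the Getting Quality Right model.

```python
import math
import random

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Calls to monitor so that a proportion estimate (e.g., the percent
    of calls meeting a standard) lands within +/- margin_of_error at
    roughly 95% confidence (z = 1.96). p = 0.5 is the conservative
    worst case when you do not know the true proportion."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

def draw_random_sample(call_ids, n, seed=None):
    """Simple random sample of call IDs, without replacement, so every
    call has an equal chance of being monitored."""
    rng = random.Random(seed)
    return rng.sample(call_ids, n)

# A margin of +/- 5 percentage points requires 385 monitored calls.
n = sample_size_for_proportion(0.05)
calls_to_monitor = draw_random_sample(list(range(10_000)), n, seed=42)
```

The point of the seed parameter is reproducibility: an auditor can re-draw the same sample and confirm it really was random rather than hand-picked.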

The Power of Validity: One of the biggest complaints I hear from call centers echoes the famous line from Rodney Dangerfield: “We don’t get no respect around here.”  The rest of the organization doesn’t listen or respond to feedback from the call center. It’s as if you’re not important – until things go wrong. One of the primary reasons for this lack of respect is not adequately establishing the validity of your quality monitoring.

Once you establish the validity of the characteristics you are monitoring – in terms that mean something to both the caller/client and your organization – then you will have the grounds for respect. One of the most important criteria to validate your quality monitoring is the callers’ satisfaction with the calling experience. Various service providers – such as Customer Relationship Metrics – offer sophisticated ways to do this, methods that I embrace.

However, I’d like to propose a simple way to validate your monitoring. I call it “listening from the customer’s point of view.”  One way to do this is to augment the monitoring form with one that addresses the customer’s experience. After all, this is the primary outcome you want to achieve.

Another way is to monitor some calls holistically from the caller’s point of view. Using a simple, four-point Likert scale, you can ask yourself, “Overall, in my opinion, was this caller: delighted, satisfied, mollified, or disgruntled with the calling experience?”

Once a sufficient database of calls has been monitored in this way, determine if there is a correlation between the customer’s point of view and the typical rating criteria already in use. If the correlations are strong, it is an indicator that the usual criteria are valid. If the correlations are weak or nonexistent, this is an indicator that you need to revise your monitoring criteria.
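A minimal sketch of that correlation check follows. The numeric coding of the four-point scale (disgruntled = 1 through delighted = 4), the sample scores, and the 0.7 cutoff for a "strong" correlation are illustrative assumptions on my part, not data or thresholds from the article.

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Holistic caller-experience ratings: 1=disgruntled, 2=mollified,
# 3=satisfied, 4=delighted (hypothetical data).
holistic = [4, 3, 1, 4, 2, 3, 1, 4]
# Internal monitoring-form scores (0-100) for the same calls.
qa_score = [95, 82, 40, 90, 60, 78, 35, 98]

r = pearson(holistic, qa_score)
criteria_look_valid = r >= 0.7  # illustrative threshold for "strong"
```

If `criteria_look_valid` comes back False on a sufficiently large sample, that is the signal to revisit what the monitoring form actually measures. (Since Likert ratings are ordinal, a rank-based correlation such as Spearman's would be the more rigorous choice; Pearson keeps the sketch short.)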

External Quality Monitoring: Let’s explore in more detail what my friends at Customer Relationship Metrics mean by EQM – External Quality Monitoring. It’s a great way to see what you do not see. The longer you have been monitoring calls in a particular way, the harder it will be to see the “blind spots” in your current practice. It is easy (and quite self-deceptive) to believe that you know what is important to callers, and to believe that you measure those things with your monitoring forms. Adding EQM to your quality monitoring is a good way to bring the caller’s experience back into focus.

Your goal through external quality monitoring is to capture the voice of the customer as it relates to the calling experience. There are four ways to do this:

1) Mail surveys

2) Outbound telephone surveys

3) Automated post-call IVR surveys

4) Post-call IVR surveys with expert correction and interpretation of your results

Each way has its advantages and disadvantages. All EQM methods have bias built into their responses; it’s an inherent part of their methodology. That’s why I recommend that you use them in tandem with a randomly sampled internal quality monitoring method.

Outbound telephone surveys have the least self-selection bias, but they are the most intrusive of the EQM methods. Because of that intrusiveness, response rates tend to be low, which introduces a nonresponse bias of its own.

Mail surveys and post-call IVR surveys are both opt-in mechanisms. Consequently, those customers who participate tend to be either delighted or disgruntled with their experience. That’s not a bad thing. You should be hearing from those callers. In fact, you stand to learn a lot from your happiest and your angriest customers – just don’t mistake their feedback for a representative sample. These methods, used alone, provide an incomplete and inaccurate view of your quality.

Mail surveys are suspect due to the inevitable time lag between the phone contact and the receipt of the mailer. Just how accurately can a caller remember the quality of the interaction they had with a call center agent a week ago? With mail surveys, it becomes harder to isolate the calling experience from the overall experience of price, product, promotion, and delivery of your company’s offering.

The great advantage of post-call IVR surveys is their immediacy. Because the survey happens right after the call, caller feedback is fresh and to the point. The drawback is that if responses are captured and reported automatically, with no human review, you will likely get confusing results, because callers don’t always follow directions well.

The combination of post-call IVR surveys with expert interpretation and correction of your data is the royal approach to EQM. Use them in conjunction with a well thought out internal quality monitoring practice (as I’ve described in previous articles) to produce valid and meaningful results.

Read part 4 and part 6 in this series.

Cliff Hurst is president of Career Impact, Inc., which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net.

[From Connection Magazine September 2008]
