How Call Center Quality Differs from Manufacturing Quality Control

Part Eight in the Continuing Series, Getting Quality Right

By Cliff Hurst

In past articles, we’ve used terms such as control charts, standard deviation, normal distribution, and correlation. These terms may be unfamiliar to many call center professionals, but they are well known to people with green or black belts in Six Sigma and to practitioners of lean manufacturing, TQM, QMS, ISO 9000, and the Baldrige criteria, which define various approaches to quality management.

Unfortunately, those specialists seldom intersect with the call center. Call centers, after all, are different. However, there are ways to bridge the gap between these two worlds, and in this article I’d like to start a dialogue about how to do just that.

Call centers speak a language that is foreign to other industries. Only when you know what makes call centers distinct can you engage in meaningful dialogue with other quality practitioners.

There are three principal ways in which call centers differ. We must understand the ramifications of these differences if we are going to apply standard precepts of quality within our call centers:

Variation at the source: In our call centers we live with an immense variation of “raw materials” at the source. Our raw materials are phone calls – and each one is unique. If your center handles 160,000 calls per month and each call could be routed to any one of 100 agents, that’s 16 million possible caller/agent combinations. That’s a lot of variation – and there is little you can do to reduce it.
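The arithmetic behind that figure is easy to verify. Here is a quick back-of-envelope check in Python, using the hypothetical volumes from the example above:

```python
# Back-of-envelope check of the caller/agent combinations figure.
# The volumes are the hypothetical ones used in the example above.
calls_per_month = 160_000   # each call is unique
agents = 100                # any agent might handle any given call

combinations = calls_per_month * agents
print(f"{combinations:,} possible caller/agent pairings")  # 16,000,000
```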

Quality specialists from outside the call center may not know how to deal with the wide variations and approximate measures that are a necessary part of our daily life. That’s why, in order to really “get quality right,” I am writing this series. My goal is to combine the general precepts of quality with those of survey research and adapt both to the unique environment of call centers. My intent here is not to find fault with the precepts of quality management; I embrace those precepts wholeheartedly. I simply feel that their application must be adapted to that environment.

Quality is delivered immediately: If you’re familiar with call centers, this point is obvious, but consider the point of view of quality practitioners from a manufacturing background. In manufacturing, there is a production sequence, and that sequence can be interrupted to make quality improvements at any stage along the way until the final product is finished.

During a phone call, however, quality is delivered from start to finish, with only rare opportunities to intervene in real time. This fact requires adjustments in our approach to quality management. Commonly accepted practices stemming from a manufacturing model of quality assume that “production” occurs over time and in stages, each of which can be influenced by interventions of one sort or another, such as checks for quality. Not so with call centers.

What goes on during the call itself is beyond our ability to influence directly. Management’s ability to control the quality of a call depends on what is done before and after the call. In call centers, your best tools for quality include hiring wisely, training well, providing user-friendly technology, and offering agents coaching and monitoring feedback.

Nondestructive sampling: With today’s recording systems, most call centers have immense flexibility to sample “raw materials” at the source – and after the fact. Unlike many manufacturers, we don’t have to destroy the samples in order to inspect them. A primary use of sampling in manufacturing is what is known as “acceptance sampling”: inspecting parts or raw materials as they are received, before the production process begins. Sometimes manufacturers have to destroy batches of material in order to inspect them.

Our situation is different. Acceptance sampling is not an option for us; we can’t “reject” calls that we don’t want to deal with. Furthermore, since quality in a call center is delivered in real time, our only opportunity to monitor quality is after the event is over. (Live monitoring for the purpose of coaching is another topic for a later time.)

Given the widespread adoption of call recording technology, we can capture call samples and later analyze them for quality to our hearts’ content. Doing this has no adverse impact on the quality of the call.

This is where we need to shed our habitual ways of doing things. Monitoring forms, once developed, tend to take on a life of their own. It’s easy to get lulled into the mindset that all we have to do to achieve quality is to score our forms in some consistent way, but that’s only part of what we need to do.

Even more important is looking for other trends in our data. As long as we have a representative sample of calls that have been recorded and archived, we can perform all sorts of analyses on the sample. And we will have the same confidence in our results as we have in our monitoring scores.
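As a rough sketch of how such a sample might be drawn, assume the month’s recorded calls have been exported to a CSV file with one row per call (the file name and sample size below are hypothetical, not a prescribed format):

```python
import pandas as pd

# Hypothetical export of one month of recorded, archived calls:
# one row per call, with whatever fields your recording system provides.
calls = pd.read_csv("recorded_calls_may.csv")

# Draw a simple random sample of, say, 400 calls for deeper analysis.
# A fixed random_state makes the sample reproducible.
sample = calls.sample(n=400, random_state=42)

print(f"Sampled {len(sample)} of {len(calls)} recorded calls")
```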

The most valuable answers may be those that aren’t even on the monitoring form. For example, a common call center goal is to keep average handle time as short as can be reasonably expected. Toward that end, call centers often set up various efficiency metrics related to average talk time and hold time. However, what if longer calls tend to result in higher monitoring scores and higher caller satisfaction scores?  Could striving for efficiency be defeating other, higher purposes?

As long as you have a representative sample of calls for the month, all you need to do is run a scatterplot and correlation analysis between talk time and quality scores. Have you ever correlated quality scores with the delay in answering those calls? Here again, a scatterplot and correlation analysis can reveal the consequences of a lengthy average speed of answer in a way that typical metrics cannot. Do you really want to help your agents achieve better monitoring scores? Well, the best way to do that may be to staff more robustly for peak volumes.
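As a sketch of what that analysis could look like, assume the monthly sample includes talk time, answer delay, and the quality monitoring score for each call (the file and column names below are assumptions, not a standard layout):

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

# Hypothetical sample of monitored calls; column names are assumptions.
sample = pd.read_csv("monitored_call_sample.csv")

# Correlation between talk time and quality score.
r_talk, p_talk = pearsonr(sample["talk_time_sec"], sample["quality_score"])
print(f"Talk time vs. quality score: r = {r_talk:.2f} (p = {p_talk:.3f})")

# Correlation between answer delay and quality score.
r_asa, p_asa = pearsonr(sample["answer_delay_sec"], sample["quality_score"])
print(f"Answer delay vs. quality score: r = {r_asa:.2f} (p = {p_asa:.3f})")

# Scatterplot of talk time against quality score.
plt.scatter(sample["talk_time_sec"], sample["quality_score"], alpha=0.5)
plt.xlabel("Talk time (seconds)")
plt.ylabel("Quality monitoring score")
plt.title("Talk time vs. quality score")
plt.show()
```

A visibly rising or falling cloud of points, backed by a meaningful correlation coefficient, is often all it takes to start the conversation about whether an efficiency target is helping or hurting quality.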

Read part 7 in this series.

Cliff Hurst is president of Career Impact, Inc. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net.

[From Connection Magazine May 2009]
