Part Seven in the Continuing Series, Getting Quality Right
By Cliff Hurst
In part one of this ongoing series, we posed four vital questions to be addressed in any effective quality-monitoring program:
1) How are we, as an organization, doing at representing our call center?
2) What can we, as an organization, do to get better at representing our call center?
3) How is this particular agent doing at representing our call center?
4) What can we, as managers, do to help this agent to get better at representing our call center?
In the November issue, we began looking at the second question by considering boundaries and starting points. With this as a foundation, we are ready to consider the contributions of W. Edwards Deming to this discussion. Although first published in 1986, Deming's classic book Out of the Crisis has stood the test of time. Deming's most famous prescriptions for business are contained in his fourteen points and his seven deadly diseases of management. (If you haven't read the book, just do a Google search for "Deming 14 Points." There you'll find several good introductions to his precepts.) Deming said that, in his experience, 94 percent of all improvement opportunities lie in the system itself (the territory of vital question two) and only 6 percent with individual workers (the territory of question three).
Try This: Carve out an hour from your busy schedule. Take a copy of Deming's fourteen points and his seven deadly diseases of management and reflect on what each of those points might mean to you and your call center. No matter how busy you are, you'll find this a valuable use of your time. Ponder these questions:
- What will it mean if you remove the barriers that rob your call center reps of their right to pride of workmanship?
- What are the ramifications of Deming’s claim that “the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the workforce?”
- What would it mean to eliminate work targets or standards based on such metrics as calls per hour, average talk time, or after call work?
Think about what this means. If you are focusing most of your improvement efforts from your monitoring program at the point of giving feedback to individual agents, you are missing the biggest part of your improvement opportunities.
Put First Things First: While individual feedback is important to give, it is secondary, not primary. This new model helps keep first things first. It is in answering vital question two that you stand to gain the most from Getting Quality Right. With a consistent system in place for answering question one, you can prioritize your efforts as you dig into question two. Without answering question one first, it's easy to become distracted by improvement efforts that won't make much difference in overall quality.
Time Series View of Data: A time series view of sampled data was important to Deming, as it is to most practitioners of total quality management. We can learn a lot from studying quality scores through this lens, because quality tends to vary over time.
The key measures we have used so far in analyzing score distributions include the mean, median, range, skewness, and standard deviation, typically represented visually in a histogram. This approach to analysis has many benefits; however, it hides variation within the period being summarized.
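To make those summary measures concrete, here is a minimal Python sketch using the standard library; the scores are invented for illustration, and the skewness formula shown is the common Fisher-Pearson adjusted form:

```python
import statistics

# Hypothetical quality scores for one month of sampled calls (0-100 scale)
scores = [82, 88, 75, 91, 84, 79, 86, 90, 73, 85]

mean = statistics.mean(scores)      # 83.3
median = statistics.median(scores)  # 84.5
score_range = max(scores) - min(scores)
stdev = statistics.stdev(scores)    # sample standard deviation

# Skewness: positive means a tail of unusually high scores,
# negative means a tail of unusually low scores
n = len(scores)
skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / stdev) ** 3 for x in scores)

print(mean, median, score_range, round(stdev, 2), round(skew, 2))
```

For this sample the skewness comes out negative, reflecting a tail of low-scoring calls dragging below an otherwise solid month.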
In viewing a month’s worth of data, you can learn a lot about how you did for the month, but not about what went on within shorter periods. Therefore, you’ll benefit from looking at a supplementary point of view that shows the time dimension. That’s what a run chart and its close cousin, a control chart, can do.
Run Charts: Your randomly selected calls are already identified by a date/time stamp. All you need to do is sort them by time and date and draw a run chart. A run chart gives you a visual impression of how variation happens over time. Excel, Minitab, SPSS, and other statistical programs make it easy to generate run charts.
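As a sketch of that first step, the sorting and center-line calculation take only a few lines of Python; the timestamps and scores below are invented, and any spreadsheet or statistics package would do the same job:

```python
from datetime import datetime
import statistics

# Hypothetical (date/time stamp, quality score) pairs, as pulled from the
# recording system -- not yet in time order
sampled_calls = [
    ("2009-01-12 14:05", 78),
    ("2009-01-05 09:30", 85),
    ("2009-01-20 16:45", 91),
    ("2009-01-08 11:15", 72),
]

# Step 1: sort the sampled calls by their date/time stamp
sampled_calls.sort(key=lambda c: datetime.strptime(c[0], "%Y-%m-%d %H:%M"))

# Step 2: a run chart is simply these scores plotted in time order,
# usually with a center line drawn at the median
scores_in_order = [score for _, score in sampled_calls]
center_line = statistics.median(scores_in_order)

print(scores_in_order)  # y-values for the run chart, left to right
print(center_line)
```

From here, any charting tool can draw the line; the sort by date/time stamp is the step most often skipped.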
If you plot a month’s worth of sampled calls, you’re going to have a busy run chart. That’s okay; it’s worth considering. You can also simplify it by looking at shorter periods, or by combining data points into subgroups. You can gain more meaning when you view the data through a control chart.
Control Charts: Control charts are more complex than run charts. They allow you to convert your impressions into a quantitatively verifiable form from which you can draw conclusions. Control charts are like run charts on steroids.
Be aware that control charts come in a variety of confusing types. In addition, different authors tend to use slightly different terms for the various types. Therefore, wading into this subject is like wandering into a swamp, but it’s worth it because control charts allow you to convert your visual impressions into a statistically verifiable form.
For a classic text on this subject, see Kaoru Ishikawa’s Guide to Quality Control. At first glance, the many formulas are forbidding, but stay with him. His explanations are clear, though technical.
A Very Brief Overview: Control charts work by overlaying additional information upon run charts. The most typical of those are two boundaries known as Upper Control Limits (UCL) and Lower Control Limits (LCL). These limits are mathematically derived statistical boundaries of the variation in your data; they are not boundaries that you set by management fiat.
Control limits show you, given the variation in your quality scores as viewed over time, the range within which you can expect to see those scores vary. If all of your scores fall within the control limits, then your call handling is known to be in statistical control. Your scores may be in control, but you still may not like the wide range over which they vary. Alternatively, you may be dissatisfied with the low level of your average scores. If either is the case, it is time to address your training, coaching, or scoring processes.
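How the limits are derived depends on the type of chart. One common recipe, assuming an individuals (X-mR) chart, sets the limits at the mean plus or minus 2.66 times the average moving range. A sketch under that assumption, with invented daily scores:

```python
import statistics

# Hypothetical daily average quality scores, in time order
daily_scores = [84, 80, 86, 82, 85, 79, 83, 88, 81, 84]

center = statistics.mean(daily_scores)

# Moving range: absolute difference between each consecutive pair of scores
moving_ranges = [abs(b - a) for a, b in zip(daily_scores, daily_scores[1:])]
mr_bar = statistics.mean(moving_ranges)

# Individuals-chart limits; 2.66 is the standard constant for
# moving ranges of size two
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

out_of_control = [s for s in daily_scores if not lcl <= s <= ucl]
print(round(lcl, 1), round(ucl, 1), out_of_control)  # empty list: in control
```

Notice that the limits fall out of the data's own variation; nowhere in the calculation does a manager's target appear.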
This is what answering vital question two is all about. Process improvement in quality call monitoring is largely about two things: reduce variation and raise overall performance.
Most likely, you’ll find some scores that fall outside of the control limits. Those calls are known as outliers. Any call below the LCL is one that was scored low. Any call above the UCL was scored high. If a scored call is outside of those limits, you will want to investigate what went on with that call. You may want to recheck your evaluator’s scores. Perhaps someone should evaluate the same call independently as a check on your calibration standards.
Once you confirm that your evaluator's scores are valid, you may want to bring that call to the agent's (or supervisor's) attention. If the score is high, this could be a really well-handled call, worthy of praise. If low, remedial training or coaching of that rep may be needed.
Outliers are just one way to use control charts to identify special causes of variation. You can also use control charts as early warning indicators that something about the types of calls you are getting, or the way reps are handling them, is changing over time. There are a variety of statistical rules of thumb you can use when studying the data points in a control chart to determine this. Some things to look for are a run of points heading in the same direction, points alternating about the center line, and a cluster of scores higher or lower than the center line.
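One such rule of thumb, often stated as eight or more consecutive points on the same side of the center line, can be checked in a few lines of Python (scores invented for illustration):

```python
# Hypothetical scores in time order; the last eight sit above the center line
scores = [78, 80, 77, 81, 79, 80, 85, 86, 85, 87, 85, 88, 86, 87]
center = sum(scores) / len(scores)

# Longest run of consecutive points on the same side of the center line;
# a point exactly on the line breaks the run
longest = current = 0
prev_side = None
for s in scores:
    if s == center:
        current, prev_side = 0, None
        continue
    side = s > center
    current = current + 1 if side == prev_side else 1
    prev_side = side
    longest = max(longest, current)

print(longest)       # 8
print(longest >= 8)  # True: a common rule-of-thumb signal of a special cause
```

A run like this suggests the process has shifted, even though no single point falls outside the control limits.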
In Conclusion: At this point, with a time series view of data, run charts, and control charts as analysis tools, we are well on our way to answering vital question two: What can we, as an organization, do to get better at representing our call center?
In the next installment of this series, we will elaborate upon the differences in quality as applied to call centers.
Cliff Hurst is president of Career Impact, Inc., which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or email@example.com to subscribe to his free email newsletter or to order his book, Your Pivotal Role: Frontline Leadership in the Call Center.
[From Connection Magazine – January 2009]