Call Monitoring Isn’t Enough

By Dr. Jodie Monger

Are you ever asked how well your call center is serving your clients and callers? Many centers rely on a summary of operational metrics, assuming that hitting certain metric levels answers this critical question. In other centers, quality monitoring scores are used to answer it.

If your monitoring program is like most, you have to conclude that most callers are extremely satisfied with the telephone service experience. Scores naturally migrate to the upper end of the monitoring scale. If you have 100 points available, the majority of your scores are probably 92 or higher, or even 95 or higher. If so, you are essentially using only the top 10 points on the scale. When a measurement tool is structured in a way that impairs accurate assessment, it is called biased.

Basing such an important assessment on quality monitoring that carries the bias described above diminishes the value of the answer. Let’s review your quality monitoring program and begin the evolution toward providing a better one. Who is doing the monitoring? Avoid asking the fox to guard the chicken coop. What items are scored? It’s best to focus your monitoring form on objective criteria related to call control, providing the correct response, and effective relationship building. Why shouldn’t the monitoring form include callers’ subjective assessments? Because guessing at how the caller perceived the experience is inaccurate and contributes to the inflation of monitoring scores.

The caller is the best judge of how the experience went. From a scientific standpoint, the level of service delivered on a particular call should be assessed immediately, by the caller. This rating may appear subjective because it is not a hard metric such as ASA (average speed of answer) or a monitoring score of the response’s effectiveness from the company’s perspective, but the callers’ perceptions are the reality we must deal with in our centers. If your callers and clients are not satisfied, all of those metrics are meaningless. Yet if you know how callers perceive the service delivered, and you also have a good set of metrics and monitoring scores, the answer to how well your center is performing becomes balanced and valid.

Customer Relationship Metrics conducted a research project that demonstrated that monitoring scores do not equal the callers’ perception of service. The monitoring form included 17 items, seven of which could be directly compared to the caller evaluations. We examined the monitor and caller evaluations over a five-month period. Across those seven items, there was virtually no relationship between the caller evaluation and the monitoring evaluation. The only statistically significant relationship was for perceived interest in helping and tone, and even that relationship was weak.
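For readers who want to run the same kind of comparison on their own data, the underlying test is a straightforward correlation between paired monitor and caller scores per call. Here is a minimal sketch in Python; the data values and the conventional 0.05 significance threshold are illustrative assumptions, not the study’s figures:

```python
# Minimal sketch of testing whether monitor scores track caller ratings.
# The arrays below are illustrative placeholders, not the study's data.
from scipy.stats import pearsonr

monitor_scores = [95, 92, 97, 94, 96, 93, 98, 95]  # QA evaluator, per call
caller_ratings = [3, 5, 2, 4, 1, 5, 3, 2]          # caller survey, per call

r, p = pearsonr(monitor_scores, caller_ratings)
print(f"correlation r={r:.2f}, p-value={p:.3f}")

# A relationship counts as statistically significant only if p is below
# the chosen threshold (commonly 0.05); a significant but small |r| --
# as the study found for "interest in helping" and tone -- is still a
# weak relationship.
```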

The results of this research had a dramatic effect on the center’s quality program. The proof, from the callers’ perspective, that the call monitoring form was not effective underscored the need for a valid answer to how well service was delivered. In addition to a better answer, significant savings became possible.

The original monitoring program scored 17 items per call, on five calls per agent per month, for 2,000 agents. This equated to 170,000 scores given per month; with four call evaluations completed per hour, scoring the 10,000 monitored calls took 2,500 hours (not including feedback time). To complete that scoring, 63 full-time equivalents (FTEs) were used at $45,000 per year, for a grand total of $2.8 million (again, without feedback and coaching time). With the results of this research, the monitoring form was revamped to focus on objective measures. Scoring eight items allowed six call evaluations to be completed per hour, requiring 43 FTEs at $45,000 per year for a net personnel cost of $1.89 million. The improvement in the process yielded a savings of $910,000.
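To make the arithmetic easier to follow, here is a minimal sketch of the cost model in Python. The inputs are the article’s own figures; the function and variable names are mine. Note that the article’s rounded totals ($2.8 million, $1.89 million, and a $910,000 savings) differ slightly from the exact products computed here:

```python
# A minimal sketch of the monitoring-cost arithmetic, using the
# article's figures; the function structure is my own assumption.

FTE_SALARY = 45_000  # dollars per year, per the article

def monitoring_cost(agents: int, calls_per_agent: int, items_per_form: int,
                    calls_scored_per_hour: float, ftes: int) -> None:
    calls = agents * calls_per_agent        # monitored calls per month
    scores = calls * items_per_form         # item scores per month
    hours = calls / calls_scored_per_hour   # scoring hours per month
    cost = ftes * FTE_SALARY                # annual personnel cost
    print(f"{scores:,} scores, {hours:,.0f} hours, ${cost:,} per year")

# Original program: 17 items, 4 calls scored per hour, 63 FTEs.
monitoring_cost(2000, 5, 17, 4, 63)  # 170,000 scores, 2,500 hours, $2,835,000
# Revamped program: 8 items, 6 calls scored per hour, 43 FTEs.
monitoring_cost(2000, 5, 8, 6, 43)   # 80,000 scores, 1,667 hours, $1,935,000
```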

Your own situation may be on a smaller scale; however, the same direct benefit would apply. Savings in the actual time spent on scoring are compounded by the result of having a more effective definition of quality. Your three-part answer needs to include: 1. call metrics, 2. quality monitoring, and 3. an immediate evaluation by the caller regarding the call.
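To make that three-part answer concrete, here is a minimal sketch of how a single call’s record might combine all three components. The field names, the 60-second ASA target, and the equal weighting are hypothetical choices of mine, not a prescription from the article:

```python
# Minimal sketch of a balanced, three-part quality record for one call.
# Field names, the 60s ASA target, and equal weighting are assumptions.
from dataclasses import dataclass

@dataclass
class CallQuality:
    asa_seconds: float       # 1. call metric: average speed of answer
    monitoring_score: float  # 2. QA monitoring score, 0-100, objective items
    caller_rating: float     # 3. caller's immediate post-call rating, 1-5

    def balanced_score(self) -> float:
        # Normalize each component to the 0-1 range, then weight equally.
        asa_component = max(0.0, 1.0 - self.asa_seconds / 60.0)
        return (asa_component
                + self.monitoring_score / 100.0
                + (self.caller_rating - 1) / 4.0) / 3.0

call = CallQuality(asa_seconds=22, monitoring_score=94, caller_rating=3)
print(f"balanced quality score: {call.balanced_score():.2f}")  # 0.69
```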

Dr. Jodie Monger is the President of Customer Relationship Metrics, L.C. Prior to joining Metrics, she was the founding Associate Director of Purdue University’s Center for Customer-Driven Quality.

[From Connection Magazine, Jan/Feb 2004]
