Six Steps to Implementing a Contact Center Quality Program

By Greg Bush

From a customer experience perspective, a quality interaction is everything. How many times have you spoken with contact center agents and wondered if they understood what you really needed? Did they get your order correct? Were they even listening?

While we’ve all had these experiences and we all agree quality should be the focus of any contact center, it’s not as easy as telling your agents to do ten things on a checklist. Below are six critical steps needed to successfully implement a quality program that will influence customer experience.

1) Think about Customer Experience First: Habit number two from 7 Habits of Highly Effective People by Stephen Covey is “Begin with the end in mind.” We must think first about the kind of customer experience we desire. What is the purpose of your contact center? Is it sales, customer service, help desk support, or maybe a combination? What do customers hope to get when they contact you? Are they ordering a product or service? Do they need help with a product they have already purchased? When you think about the types of reasons customers contact you and the purpose of your contact center, you can begin to understand what makes up a quality customer experience. Also, do not limit your quality program to phone calls. Your program should be multi-channel, including all customer interactions such as email, chat, and social media.

2) Agent Acceptance: When implementing a quality program, it’s important to have the buy-in from those doing the job. Get your agents involved in deciding what quality looks like from the beginning. Agents should have input in the development of scripting, what will be evaluated, and the evaluation process. Form a task force comprised of a few agents, supervisors, the contact center manager, and the training manager. Participating agents will become advocates for the program to the rest of the agents. In addition, the agents on the task force will gain a better understanding of management’s expectations, and other agents will feel their voice is being represented in the process.

3) Training: Don’t assume that agents understand and know how to incorporate all the criteria in the new quality program. During the rollout process, it’s not only important to communicate what the new quality program looks like, but training should be given on all aspects of the quality criteria. During this time it’s important to communicate the expectations of a quality interaction and how often agents will be evaluated.

4) Calibration: Once the quality criteria are agreed on, you’ll need to make sure that the evaluators are consistent. Anyone who will be performing evaluations (supervisors, managers, trainers, or the quality team) should review and discuss customer interactions to ensure that the evaluation process is standardized. It’s best to have the participants review several calls on their own and submit their evaluation sheets and comments before the meeting. This will prevent anyone from falling victim to peer pressure, changing his or her evaluation during the meeting, and then not applying the same process later. During the calibration meeting, everyone should listen to the calls and review their evaluation notes. The outcome of this process should be a set of clear guidelines for the evaluation process.

5) Ongoing Training: The purpose of implementing a quality process is to ensure that each customer has the same experience by standardizing the agent interaction. Those evaluating agents will need to provide feedback and training on the steps the agents need to take to improve. If several agents need training in the same areas, it’s usually best to involve the training department. Another good way to train and further gain agent buy-in is to implement a mentor program. Have your best agents mentor the ones who need help (often these will be new hires). Agents receiving the help will feel more comfortable with a peer, and the mentors will become more accountable to the quality process.

6) Recognition: Recognition is the lifeblood of contact centers. You can never provide too much recognition and rewards to keep your staff motivated and reinforce positive behavior. You’ll need to design a recognition program around the quality process. You can do this by recognizing the top performers in quality over a set period, such as a month or quarter.

A fun way is to give spot recognition when you observe a great customer interaction. The recognition does not have to be monetary; however, it should be public. One way to publicly recognize quality is to email a recording of the call and specifically state in the email the areas the agent did well and how they influenced the customer experience. In addition to recognizing the agent’s great performance, you’re also providing a good example for others to hear and implement.

While a new quality program will not prevent bad customer experiences, the goal is to minimize them. A quality program that is embraced by everyone involved is your first step in creating a better experience for customers. Following these steps to introduce a new quality program to your contact center will not only create a better customer experience, but it will also provide a better experience for everyone on your team.

Greg Bush is a call center executive with over fifteen years of industry experience. His background includes both sales and customer service. He is experienced in call center start-up and turnaround, driving revenue by placing a strong focus on best practices and innovative technology. You can contact Greg at gbush73@gmail.com or 972-822-9283.

[From Connection Magazine Jan/Feb 2013]

Improving Quality of Experience while Achieving ROI

By Tim Moynihan

Customer experience is critical for long-term business success. However, a business’s communication system is bound to experience unpredictable technical issues and dropped calls. While glitches will sometimes happen, users still expect the highest level of quality and will no longer tolerate service “hiccups.” Poor customer experience should be the exception, not the norm, and it should not be a contributor to revenue loss.

Businesses that strive to differentiate themselves by providing a great Quality of Experience (QoE) often recognize a strong return on investment (ROI) from increased customer retention and lower revenue leakage rates. Smart businesses know that in order to continuously assure customer satisfaction, it’s necessary to implement end-to-end performance monitoring.

Before moving forward with a performance monitoring implementation, executives typically must understand the financial return prior to committing funds. It’s easy to say that a breakdown in a business communication system can result in lost customers, but many organizations prefer a more concrete analysis.

To illustrate just how quickly the costs associated with technology failures can add up, consider these quick computations associated with interactive voice response (IVR) failures or poor voice quality (as highlighted in Figure 1). Until an organization goes through this exercise, it’s hard to comprehend just how big of an issue this is.

Costs Associated with Technology Failures

Cost of IVR failures = (average cost per call handled by an agent – average cost per call contained by the IVR) x calls sent to agents as a result of IVR outages

Cost of poor voice quality = percent of calls extended due to poor voice quality x average additional talk time in minutes x average call cost per minute

Cost of back-office delays = percent of calls affected by back-end issues x average length of delay in minutes x average call cost per minute

Cost of misdirected calls = percent of calls incorrectly transferred x average cost per transfer

Figure 1: These calculations do not include costs
associated with diagnosing and correcting problems.
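
To see how quickly these figures add up, here is a minimal sketch in Python that plugs hypothetical numbers into the four formulas in Figure 1. Every value below (call volume, percentages, and per-call and per-minute costs) is an illustrative assumption rather than a benchmark, and the percentages are applied to an assumed monthly call volume to turn them into call counts.

# Hypothetical monthly inputs -- substitute your own contact center's data.
monthly_calls = 160_000

agent_cost_per_call = 6.50            # average cost per call handled by an agent
ivr_cost_per_call = 0.40              # average cost per call contained by the IVR
calls_spilled_by_ivr_outages = 2_000  # calls sent to agents as a result of IVR outages

pct_calls_poor_voice = 0.05           # percent of calls extended due to poor voice quality
extra_talk_minutes = 1.5              # average additional talk time in minutes
cost_per_minute = 0.90                # average call cost per minute

pct_calls_back_office = 0.03          # percent of calls affected by back-end issues
avg_delay_minutes = 2.0               # average length of delay in minutes

pct_calls_misdirected = 0.02          # percent of calls incorrectly transferred
cost_per_transfer = 1.25              # average cost per transfer

cost_ivr_failures = (agent_cost_per_call - ivr_cost_per_call) * calls_spilled_by_ivr_outages
cost_poor_voice = pct_calls_poor_voice * monthly_calls * extra_talk_minutes * cost_per_minute
cost_back_office = pct_calls_back_office * monthly_calls * avg_delay_minutes * cost_per_minute
cost_misdirected = pct_calls_misdirected * monthly_calls * cost_per_transfer

print(f"IVR failures:       ${cost_ivr_failures:,.0f}")
print(f"Poor voice quality: ${cost_poor_voice:,.0f}")
print(f"Back-office delays: ${cost_back_office:,.0f}")
print(f"Misdirected calls:  ${cost_misdirected:,.0f}")

Even with these modest assumptions, the combined monthly cost runs into the tens of thousands of dollars, which is exactly the point of the exercise.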

Tips for Achieving Positive QoE and ROI

  1. Preempt issues and ensure peak performance: A major financial services corporation saved three million dollars by reducing voice quality issues and improving its automated response system, generating tremendous returns for both customers and the bottom line.
  2. Resolve database back-end and automated response issues: An insurance company generated 1.5 million dollars in savings after monitoring its network and responding to glitches affecting customer service.
  3. Address automated response and call routing systems: A major transportation company saved four million dollars by addressing issues associated with its automated self-service solutions and call routing systems, after an end-to-end monitoring solution isolated the source of the issues.

Advances in communication and enterprise technologies should enable organizations to streamline processes and enhance the customer experience, not burden customers with dropped calls or long wait times. By implementing an end-to-end monitoring solution, organizations have greater visibility across today’s complex environments. This enables businesses to reduce the time it takes to understand the source of a problem and fix it before their customers even notice the glitch.

Tim Moynihan is vice president of marketing at Empirix.

[From Connection Magazine November 2012]

Quality Redesign to Drive Business Success

By Linda Duba

So, you’ve just assumed the role of quality manager and have been tasked with redesigning the quality program, including the monitoring form. Where do you begin? How do you proceed? What steps do you take to ensure the new program meets everyone’s needs, from the agent to the executive?

These are challenging questions that require a thoughtful and structured approach. The right quality program has the potential to transform your contact center into a powerhouse that drives business results. This transformation begins by changing the quality culture from a “monitor” environment to one of focused and proactive change management. The following quality redesign road map is based on my work with thousands of contact centers of all sizes:

Establish a Strategic Vision: One of the keys to building an effective quality program is knowing what to measure and how those metrics help drive business success. An executive sponsor can help you establish a strategic vision for your program based on corporate objectives. This guidance will drive your process and inform design. Your sponsor can define the following key business initiatives and business goals:

Overall business vision and brand

  • What is the vision statement, and how is this vision practiced within the organization on a daily basis?
  • What message or image does the business strive to project to callers and employees?

Market share and/or customer acquisition

  • What are the specific initiatives planned to acquire new customers?
  • How does the contact center support those plans?

Revenue

  • What are the current revenue sources and channels?
  • How does the business extend or expand the customer relationship?
  • Are there new strategies to increase revenue per customer?
  • How can the contact center support those strategies?

Customer Satisfaction

  • How is satisfaction measured?
  • Is “net promoter” a key measure?
  • Is FCR (first call resolution) a key business metric? What comprises FCR?
  • Do customer satisfaction and feedback drive organizational change?
  • Is there a “problem incidence” or “service recovery process” metric?

These corporate objectives are the basis for your strategic vision, a simple statement of how the quality program will support these goals. For example, “The ABC Company’s customer service quality program ensures that the customer service organization achieves the highest levels of customer satisfaction while meeting our growth objectives and operating in the most efficient manner possible.”

Assess Your Current Program: The next step is understanding how agents, supervisors, and managers feel about the quality program and how well the program supports the strategic vision. You may find that supervisors as well as agents are somewhat mistrustful of the current monitoring process and evaluation form. Agents may feel like they’re being watched or corrected. Supervisors may struggle with using the monitoring form and helping agents improve their performance versus just monitoring for errors. In many cases, you will also find misalignment between strategic objectives and the specific behaviors and skills being evaluated.

Establish New Quality Guidelines: This is the stage to establish a “quality council” comprised of key stakeholders in the quality process: agents, supervisors, trainers, sales coaches, and managers. Members will gather peer feedback that they can bring to the table, as well as draw upon their own experiences and ideas. Through a series of meetings and feedback sessions, the council will review the strategic vision and begin the process of translating that vision into pragmatic quality components.

Map behaviors that relate to each of the business initiatives and goals, such as:

  • Sales effectiveness
  • Problem identification, ownership, and resolution
  • Effective listening
  • Relationship-building and rapport
  • Courtesy
  • Empathy
  • Effective call management

Document the “need to have” components based on:

  • Legal or regulatory requirements
  • Customer security or privacy
  • The brand (if branding drives the business)

Assess all components using these statements:

  • Are they actionable? Will they lead to better business results, process change, or customer satisfaction?
  • Can they be objectively defined?
  • If they can be defined, are there tactical ways to coach for improvement?
  • Will they help identify and close process and satisfaction gaps?
  • Will they help motivate agents, supervisors, coaches, and trainers towards continuous quality improvement?

Finally, the council develops themes and groups the accepted components within these themes, such as:

  • Opening, greeting, and customer verification
  • Probing and problem identification
  • Fundamentals and policy or process adherence
  • Finesse and soft skills
  • Sales effectiveness and expanding the relationship
  • Closure and ensuring customer satisfaction

Design the New Monitoring Form: Now it’s time to design the actual quality monitoring form. Start by placing the strategic vision on the form; it will serve as a reminder of the program’s purpose and a reinforcement of quality, coaching, and agent development.

Next, build the questions, statements, and supporting coaching points that will be used for measurement. Ensure that the question or statement has supporting coaching points within the form where possible. Refer to your assessment criteria to clearly define each question or statement. Make sure that the sections and questions provide a logical flow. Agents and quality analysts will provide valuable insight here, since they service or listen to calls on a daily basis. Having an effective flow will save time for quality analysts.

Finally, determine the importance of each section and the questions and statements within those sections. Refer to your strategic business initiatives to help make these scoring decisions. Look for critical questions that support compliance, regulatory, and customer satisfaction objectives. Weight the form according to the behaviors you wish to drive and can support with positive coaching.
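
To make the weighting step concrete, here is a small hypothetical sketch in Python. The section names, weights, and scores are invented for illustration; in practice they would come from your own quality council and strategic initiatives.

# Hypothetical section weights (summing to 100) and one call's section scores (0-100).
weights = {
    "Opening, greeting, and verification": 15,
    "Probing and problem identification": 25,
    "Policy and process adherence": 25,
    "Finesse and soft skills": 20,
    "Closure and customer satisfaction": 15,
}
scores = {
    "Opening, greeting, and verification": 90,
    "Probing and problem identification": 70,
    "Policy and process adherence": 100,
    "Finesse and soft skills": 80,
    "Closure and customer satisfaction": 85,
}

weighted_total = sum(weights[s] * scores[s] for s in weights) / sum(weights.values())
print(f"Weighted quality score: {weighted_total:.1f} out of 100")

Raising the weight of a section, such as compliance-critical questions, automatically makes lapses in that section more costly to the overall score, which is how the form steers the behaviors you want to drive.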

Communicate Change: Few organizations put enough emphasis on the critical communications step. Providing information updates throughout the development and design process goes a long way in making the program a success. Use bulletin boards, Intranet sites, team meetings, and “desk drops” to keep employees informed of various stages and decisions being made. The more they know, the more likely they are to process the changes and accept them. Also, ensure that they have feedback channels.

Design a launch process that incorporates a “nesting” phase for the new form. Use parallel testing with both the existing and new form to understand how agent performance is affected. This also allows agents, supervisors, coaches, and analysts to adapt to the changes.

Provide Coaching: John Wooden, former UCLA basketball coach and the “Wizard of Westwood,” said, “A coach is someone who can give correction without causing resentment.” The entire purpose of evaluation and measurement is to improve – and improvement happens in the direct feedback, encouragement, and guidance provided between agent and supervisor or quality coach. The best coaching practices involve both “virtual” and “sit-down” coaching.

Virtual coaching gives the agent the opportunity to receive written feedback and review evaluations and call recordings on their own, as well as conduct self-assessments. This is a time to listen, consider, and reflect, and it creates an openness to truly “hear” the feedback and learn from it.

When virtual coaching is followed by a sit-down meeting, the impact grows tremendously. Great coaching is a dialogue, not a lecture. The agent is receptive, and learning and growth happen at a dramatic rate. And when the coaching is firmly rooted in important objectives for the company, it becomes an experience of working as a team towards common goals versus an interpersonal and subjective evaluation.

Summary: The road map laid out in this document will help you transform an existing quality program into a strategic asset. Remember that quality is a journey with great challenges and tremendous potential rewards. Employ these principles to avoid a bumpy ride and guide your contact center to gain the utmost value from this important process.

Linda Duba is a business professional with over twenty-five years of contact center experience supporting services operations, training, project management, branding, and customer experience management for a worldwide financial services company. Linda is viewed as an expert problem solver, negotiator, and presenter – a customer-focused professional who forges solid relationships across an organization and with strategic partners, and who excels at building consensus across multiple organizational groups and levels.

[From Connection Magazine March 2012]

Call Center Quality Becomes a Key Differentiator

By Dana Allender

For many years, when an organization turned to a call center outsourcer to make and take phone calls on its behalf, there was just one major consideration: cost. Times have changed, however, and now organizations are beginning to see call centers less as a necessary expense and more as a strategic avenue through which they can engage and retain customers.

In today’s hard economic times, companies are looking for teleservices providers that can not only deliver expertise but also reflect the true culture of their company. The bottom line is that companies don’t need a call center; they need a solution to assist them in achieving their business objectives. The solution needs to be flexible, scalable, and easily deployed, while generating quantifiable and measurable results.

No longer do companies want a call center that just handles their calls; they want a strategic partner that serves as an extension of their organization and understands the specific needs of their customers and callers. Using a call center that focuses on quality and strives for 100 percent customer satisfaction gives an organization an advantage over competitors who take a cost-savings approach to their phone-based marketing and customer service. However, the call center must be able to do more than just talk a good game; it must be able to deliver the results it promises.

People expect performance right out of the gate; they need to see how it affects their profits. The outsource call center must prove that it can successfully execute strategy and produce results at or above their clients’ expectations, whether it is generating new sales, reducing costs, or retaining customers. The outsource call center partner should be brought to the table at the beginning so that they understand client goals and objectives and also are able to provide insights from their years of experience about how to achieve those objectives and overcome any challenges.

One issue for marketers today is that consumers want to be reached in specific, individualized ways. Therefore, to maximize its marketing and customer care efforts, an organization must have an in-depth understanding of its customer base.  A good call center can provide a wealth of experience in targeted marketing to different segments of the market based on multiple demographics.

Businesses and consumers don’t want to be “mass-marketed,” but in the past it’s been cost prohibitive to reach every one of them individually. Now, however, with each person having his or her own unique ways of wanting to be communicated with, it’s up to each company to uncover those ways and deliver them.

In today’s economy, where competition for customers is fierce, the strategic use of call centers can provide a company with far-reaching benefits to achieve its goals and find profitable solutions to its unique business problems.

Dana Allender is the director of business development for teleservices company InfoCision Management Corporation.

[From Connection Magazine January 2010]

How Call Center Quality Differs from Manufacturing Quality Control

Part Eight in the Continuing Series, Getting Quality Right

By Cliff Hurst

In past articles, we’ve used terms that were perhaps unfamiliar to many, such as control charts, standard deviation, normal distribution, and correlations. These may not be familiar to call center professionals, but they are well known to people with green or black belts in Six Sigma and experts in lean manufacturing, TQM, QMS, ISO 9000, and Baldrige criteria, which define various approaches towards quality management.

Unfortunately, those specialists seldom intersect with the call center. Call centers, after all, are different. However, there are ways to begin to bridge the gap between these two worlds. In this article, I’d like to start a dialogue about how to begin doing just that.

Call centers speak a language that is foreign to other industries. Only when you know what makes call centers distinct can you engage in meaningful dialogue with other quality practitioners.

There are three principal functions that make call centers different. We must understand the ramifications of these differences if we are going to apply standard precepts of quality within our call centers:

Variation at the source: In our call centers we live with an immense variation of “raw materials” at the source. Our raw materials are phone calls – and each is unique. If your center handles 160,000 calls per month and each has an equal chance of being handled by 100 different agents, that’s 16 million possible combinations of caller/agent interaction. That’s a lot of variation – and there is little you can do to reduce it.

Quality specialists from outside the call center may not know how to deal with the wide variations and approximate measures that are a necessary part of our daily life. That’s why, in order to really “get quality right,” I am writing this series. My goal is to combine the general precepts of quality with those of survey research and adapt both to the unique environment of call centers. My intent here is not to find fault with the precepts of quality management; I embrace those precepts wholeheartedly. I simply feel that their application must be adapted to the unique environment of call centers.

Quality is delivered immediately: If you’re familiar with call centers, this point is obvious, but remember the point of view of quality practitioners from a manufacturing background. In manufacturing, there is a production sequence, and that sequence can be interrupted to make quality improvements at any stage along the way until the final product is finished.

During a phone call, however, quality is delivered from start to finish with only rare opportunities to intervene in real time. This fact requires adjustments in our approach to quality management. Commonly accepted practices stemming from a manufacturing model of quality assume that “production” occurs over time and in stages, each of which can be influenced by interventions of one sort or another, such as checks for quality. Not so with call centers.

What goes on during the call is beyond our ability to directly influence. In call centers, management’s ability to control the quality of a call is dependent upon what is done before and after the call. In call centers, your best tools for quality include hiring wisely, training well, providing user-friendly technology, and offering agents coaching and monitoring feedback.

Nondestructive sampling: With today’s recording systems, most call centers have immense flexibility to sample “raw materials” at the source – and after the fact. Plus, unlike many manufacturers, we don’t have to destroy the samples in order to inspect them. This is not always the case in manufacturing. A primary function of quality sampling in manufacturing is for the purpose of what is known as “acceptance sampling.”  Acceptance sampling happens when the parts, or raw materials, are received before the production process begins. Sometimes manufacturers have to destroy batches of material in order to inspect them.

Our situation is different. Call center acceptance sampling is not an option for us. We can’t “reject” calls that we don’t want to deal with. Furthermore, since quality in a call center is delivered in real time, our only opportunity to monitor quality is after the event is over. (Live monitoring for the purpose of coaching is another topic for a later time.)

Given the widespread adoption of call recording technology, we can capture call samples and later analyze them for quality to our hearts’ content. Doing this has no adverse impact on the quality of the call.

This is where we need to shed our habitual ways of doing things. Monitoring forms, once developed, tend to take on a life of their own. It’s easy to get lulled into the mindset that all we have to do to achieve quality is to score our forms in some consistent way, but that’s only part of what we need to do.

Even more important is looking for other trends in our data. As long as we have a representative sample of calls that have been recorded and archived, we can perform all sorts of analyses on the sample. And we will have the same confidence in our results as we have in our monitoring scores.

The most valuable answers may be those that aren’t even on the monitoring form. For example, a common call center goal is to keep average handle time as short as can be reasonably expected. Toward that end, call centers often set up various efficiency metrics related to average talk time and hold time. However, what if longer calls tend to result in higher monitoring scores and higher caller satisfaction scores?  Could striving for efficiency be defeating other, higher purposes?

As long as you have a representative sample of calls for the month, all you need to do is run a scatterplot and correlation analysis between talk time and quality scores. Have you ever correlated quality scores with the delay in answering those calls? Here again, a scatterplot and correlation analysis can reveal the consequences of a lengthy average speed of answer in a way that typical metrics cannot. Do you really want to help your agents achieve better monitoring scores? Well, the best way to do that may be to staff more robustly for peak volumes.
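
As a rough sketch of the analysis just described, assuming you have exported each sampled call’s talk time and quality score from your recording system, the following Python fragment computes the correlation and draws the scatterplot; scipy and matplotlib simply stand in for the Excel, Minitab, or SPSS steps, and the values are invented.

from scipy.stats import pearsonr
import matplotlib.pyplot as plt

# One entry per sampled call for the month (illustrative values).
talk_time_minutes = [4.2, 6.1, 3.5, 8.0, 5.4, 7.2, 2.9, 6.8, 5.0, 7.8]
quality_scores = [78, 88, 74, 95, 82, 90, 70, 86, 80, 92]

r, p = pearsonr(talk_time_minutes, quality_scores)
print(f"Talk time vs. quality score: r = {r:.2f}, p = {p:.3f}")

plt.scatter(talk_time_minutes, quality_scores)
plt.xlabel("Talk time (minutes)")
plt.ylabel("Quality score")
plt.title("Do longer calls earn higher quality scores?")
plt.show()

# The same few lines work for delay-in-queue (speed of answer) versus quality score.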

Read part 7 in this series.

Cliff Hurst is president of Career Impact, Inc, which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net. You can sign up for his free email newsletter or order his book, Your Pivotal Role: Frontline Leadership in the Call Center, at www.careerimpact.net.

[From Connection Magazine May 2009]

Deming’s Contribution to Call Center Quality

Part Seven in the Continuing Series, Getting Quality Right

By Cliff Hurst

In part one of this ongoing series, we posed four vital questions to be addressed in any effective quality-monitoring program:

1)   How are we, as an organization, doing at representing our call center?

2)   What can we, as an organization, do to get better at representing our call center?

3)   How is this particular agent doing at representing our call center?

4)   What can we, as managers, do to help this agent to get better at representing our call center?

In the November issue, we began looking at the second question by considering boundaries and starting points. With this as a foundation, we are ready to consider the contributions of W. Edwards Deming relevant to this discussion. Although first published in 1986, Deming’s classic book Out of the Crisis has stood the test of time. Deming’s most famous prescriptions for business are contained in his fourteen points and his seven deadly diseases of management. (If you haven’t read the book, just do a Google search for “Deming 14 Points.” There you’ll find several good introductions to his precepts.) Deming said that, in his experience, 94 percent of all improvement opportunities come from answering vital question two and only 6 percent from answering the third one.

Try This: Carve out an hour from your busy schedule. Take a copy of Deming’s fourteen points and his seven deadly diseases of management and spend that hour reflecting on what each of those points might mean to you and your call center. No matter how busy you are, you’ll find this a valuable use of your time. Ponder these questions:

  • What will it mean if you remove the barriers that rob your call center reps of their right to pride in workmanship?
  • What are the ramifications of Deming’s claim that “the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the workforce”?
  • What would it mean to eliminate work targets or standards based on such metrics as calls per hour, average talk time, or after-call work?

Think about what this means. If you are focusing most of your improvement efforts from your monitoring program at the point of giving feedback to individual agents, you are missing the biggest part of your improvement opportunities.

Put First Things First: While individual feedback is important to give, it is secondary, not primary. This new model helps keep first things first. It is in answering vital question two that you stand to gain the most from Getting Quality Right. With a consistent system in place for answering question one, you can prioritize your efforts as you dig into question two. Without answering question one first, it’s easy to become distracted by improvements that really won’t make much difference in overall quality.

Time Series View of Data: Deming considered a time series view of sampled data important, as do most practitioners of total quality management. We can learn a lot from studying quality scores through this lens, because quality tends to vary over time.

The key measures we have used so far in analyzing the distribution of data include the mean, median, range, skewness, and standard deviation. These results are typically represented visually in a histogram. This approach to analysis has many benefits; however, it hides variations within the period being analyzed.

In viewing a month’s worth of data, you can learn a lot about how you did for the month, but not about what went on within shorter periods. Therefore, you’ll benefit from looking at a supplementary point of view that shows the time dimension. That’s what a run chart and its close cousin, a control chart, can do.

Run Charts: Your randomly selected calls are already identified by a date/time stamp. All you need to do is sort them by time and date and draw a run chart. A run chart gives you a visual impression of how variation happens over time. Excel, Minitab, SPSS, and other statistical programs make it easy to generate run charts.
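
For readers working outside of those packages, a run chart takes only a few lines of Python with matplotlib; the scores below are invented and would be replaced by your own time-ordered sample.

import matplotlib.pyplot as plt

# Monitoring scores already sorted by each call's date/time stamp (illustrative values).
scores = [82, 76, 88, 91, 70, 85, 79, 93, 68, 84, 90, 77]
center_line = sum(scores) / len(scores)

plt.plot(range(1, len(scores) + 1), scores, marker="o")
plt.axhline(center_line, linestyle="--", label=f"mean = {center_line:.1f}")
plt.xlabel("Sampled call (in time order)")
plt.ylabel("Quality score")
plt.title("Run chart of quality monitoring scores")
plt.legend()
plt.show()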

If you plot a month’s worth of sampled calls, you’re going to have a busy run chart. That’s okay; it’s worth considering. You can also simplify it by looking at shorter periods, or by combining data points into subgroups. You can gain more meaning when you view the data through a control chart.

Control Charts: Control charts are more complex than run charts. They allow you to convert your impressions into a quantitatively verifiable form from which you can draw conclusions. Control charts are like run charts on steroids.

Be aware that control charts come in a confusing variety of types. In addition, different authors tend to use slightly different terms for the various types. Therefore, wading into this subject is like wandering into a swamp, but it’s worth it because control charts allow you to convert your visual impressions into a statistically verifiable form.

For a classic text on this subject, see Kaoru Ishikawa’s Guide to Quality Control. At first glance, the many formulas are forbidding, but stay with him. His explanations are clear, though technical.

A Very Brief Overview: Control charts work by overlaying additional information upon run charts. The most typical of those are two boundaries known as Upper Control Limits (UCL) and Lower Control Limits (LCL). These limits are mathematically derived statistical boundaries of the variation in your data; they are not boundaries that you set by management fiat.

Control limits show you, given the variation in your quality scores as viewed over time, the range within which you can expect to see those scores vary. If all of your scores fall within the control limits, then your call handling is known to be in statistical control. Your scores may be in control, but you still may not like the wide range over which they vary. Alternatively, you may be dissatisfied with the low level of your average scores. If either is the case, it is time to address your training, coaching, or scoring processes.

This is what answering vital question two is all about. Process improvement in quality call monitoring is largely about two things: reducing variation and raising overall performance.

Most likely, you’ll find some scores that fall outside of the control limits. Those calls are known as outliers. Any call below the LCL is one that was scored low. Any call above the UCL was scored high. If a scored call is outside of those limits, you will want to investigate what went on with that call. You may want to recheck your evaluator’s scores. Perhaps someone should evaluate the same call independently as a check on your calibration standards.
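
As a rough illustration only, the sketch below approximates the control limits by placing them three standard deviations above and below the mean of the sampled scores and then flags any outliers; a proper individuals chart uses moving-range-based limits, for which Ishikawa’s text gives the exact formulas.

from statistics import mean, stdev

# The same time-ordered sample of monitoring scores used for the run chart.
scores = [82, 76, 88, 91, 70, 85, 79, 93, 68, 84, 90, 77]

center = mean(scores)
sigma = stdev(scores)
ucl = center + 3 * sigma  # Upper Control Limit (approximation)
lcl = center - 3 * sigma  # Lower Control Limit (approximation)
print(f"center = {center:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")

# Flag calls whose scores fall outside the control limits.
for call_number, score in enumerate(scores, start=1):
    if score > ucl or score < lcl:
        print(f"Call {call_number}: score {score} is an outlier; re-check the evaluation and the call.")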

Once you confirm that your evaluator’s scores are valid, you may want to bring that call to the agent’s (or the supervisor’s) attention. If the score is high, this could be a really well handled call, worthy of praise. If low, remedial training or coaching of that rep may be needed.

Outliers are just one way to use control charts to identify special causes of variation. You can also use control charts as “early warning indicators” that there is something about the types of calls you are getting, or the way that reps are handling calls, that is changing over time. There are a variety of statistical rules of thumb that you can use when studying data points in a control chart to determine this. Some things to look for are a run of data points heading in the same direction or points alternating about the central tendency line. Also, you’ll want to scrutinize a cluster of scores that are higher or lower than the central line.

In Conclusion: At this point, with a time series view of data, run charts, and control charts as analysis tools, we are well on our way to answering vital question two: What can we, as an organization, do to get better at representing our call center?

In the next installment of this series, we will elaborate upon the differences in quality as applied to call centers.

Read part 6 and part 8 in this series.

Cliff Hurst is president of Career Impact, Inc, which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net for his free email newsletter or order his book, Your Pivotal Role: Frontline Leadership in the Call Center.

[From Connection Magazine January 2009]

Systems, Boundaries, and Starting Points in the Call Center

Part Six in the Continuing Series, Getting Quality Right

By Cliff Hurst

Let’s review: In the first installment of this series we asked, “Why monitor calls?” This initial question prompted four additional questions to be addressed in any effective quality monitoring program.

1)   How are we, as an organization, doing at representing our call center?

2)   What can we, as an organization, do to get better at representing our call center?

3)   How is this particular agent doing at representing our call center?

4)   What can we, as managers, do to help this agent to get better at representing our call center?

To address the first question, we uncovered four elements:

  • Select a random sample of calls for monitoring
  • Select a sufficient sample size to achieve the intended level of accuracy and precision
  • Monitor calls using processes and a QM form that are reliable and valid
  • Achieve results that are approximately normal in their distribution

Now we can move on to the second question, “What can we, as an organization, do to get better at representing our call center?” The answer to that question starts with systems theory. Some systems are closed, and others are open.

Closed Systems: The most commonly recognized example of a closed system is a thermostat. The thermostat takes feedback in the form of temperature. It acts on that feedback to turn on the heat or air conditioning when needed until the temperature reaches a predetermined level, and then the thermostat turns off the heat or air conditioning. The thermostat is a control mechanism, using a single-loop feedback to keep the variation in temperature within a narrow range. A closed system operates all by itself and responds to only one variable. Within call centers, a simple ACD that distributes calls to the next available agent is an example of a closed system.

Open Systems: Most processes in a call center, however, are not so simple and are better described as open systems. Open systems operate under the influence of feedback from multiple variables. They also interact with, change, and are changed by those same variables; they interact with their environment, which is a larger system of which they are a part.

Take a workforce management system, for instance. You begin with calculations based on historical call volumes and call lengths. You then adjust those calculations by forecasts that take into account nonhistorical variables. Unfortunately, even the best plans can be thrown off by many things. When this occurs, call volumes can unexpectedly spike, and service levels plummet. In an open system, the notion of control goes out the window. Think of all the mutually influencing variables involved, and you will see that a quality monitoring process is best described as an open system.

With the theory of open systems in mind, we recognize that the best we can accomplish through quality monitoring is the ability to influence the outcome of the system. To think that we can control the outcome is foolish. Unfortunately, because the notion of control is ingrained as a management principle, understanding the influential nature of an open system is more difficult than it first appears.

Boundaries: Systems have both boundaries and starting conditions. As typically practiced, the boundaries of quality call monitoring extend from “hello” to “good-bye.” If your sole purpose in monitoring calls is to evaluate agent performance, then this is an acceptable boundary. However, to answer question number two, “What can we, as an organization, do to get better at representing our call center?” broader boundaries are called for.

To extend the starting boundary, consider including the speed of answer and the caller’s experience with your IVR. Likewise, after “good-bye,” the caller’s experience continues. How promptly was the order or request processed? How accurate was it? To extend the boundaries even further, consider the reason that prompted the call in the first place.

In reflecting on what we might learn from the self-destruction of the outbound B-to-C industry in the United States, I realized that a broad conception of the boundaries of the outbound telemarketing “system” in this country would encompass the entire population of U.S. households. This population can be thought of as a “common pool” resource that needed to be marketed with sustainability in mind; it wasn’t.

Starting Conditions: An open system is extremely sensitive to initial conditions. Change the initial conditions of a system even slightly, and the outcomes may vary wildly and unpredictably. For call centers, one starting condition that may influence the quality of a caller’s experience is the length of time that the caller has had to wait in queue before reaching a live agent. Managing the queue is a day-to-day challenge for many call center managers. Instead of directly lobbying for an increased head count, perhaps a more effective approach would be to build a rational argument based on a statistically sound analysis of the consequences of keeping callers in queue.

Queue time is just one example of the importance of initial starting conditions on quality scores. Here are a few others:

IVR: What if you have just implemented a new IVR menu – or installed an IVR? That’s a change in starting conditions. What impact does that change have on quality scores?

Shift bidding: Suppose you practice shift bidding and your experienced reps prefer to work days. This leaves your newest agents working at night. You can determine if there is a correlation between longevity and quality scores, using those results to determine whether it might be worthwhile to provide additional supervision, coaching, or training to newer reps working at night.

Cyclical events: Suppose monthly statements or mailings are sent out. What happens the week after a redesigned form is mailed?

In your continuing quest for getting quality right, begin to view your call center as an open system, expand your understanding of its boundaries, and factor in the starting conditions. The result will be the ability to better represent your call center.

Read part 5 and part 7 in this series.

Cliff Hurst is president of Career Impact, Inc, which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net.

[From Connection Magazine November 2008]

External Validation: Part Five in the Continuing Series, Getting Quality Right

By Cliff Hurst

Over the past two years I have devoted a great deal of my professional attention and academic studies to developing a new model of quality management in call centers called “Getting Quality Right.” This model is based on the realization that there are four vital questions that must be answered in order to get quality right. In this article, we will wrap up our discussion on the first question: How are we, as an organization, doing at representing our company to its customers?

There are four elements to addressing this question:

  • You must monitor a random sampling of calls.
  • You must monitor a sufficiently large number of calls to achieve the degree of precision and accuracy that you desire.
  • The distribution of scores from your sample must approximate a normal distribution.
  • Your monitoring forms must be both reliable and valid.

The Power of Validity: One of the biggest complaints I hear from call centers echoes the famous line from Rodney Dangerfield: “We don’t get no respect around here.”  The rest of the organization doesn’t listen or respond to feedback from the call center. It’s as if you’re not important – until things go wrong. One of the primary reasons for this lack of respect is not adequately establishing the validity of your quality monitoring.

Once you establish the validity of the characteristics you are monitoring – in terms that mean something to both the caller/client and your organization – then you will have the grounds for respect. One of the most important criteria to validate your quality monitoring is the callers’ satisfaction with the calling experience. Various service providers – such as Customer Relationship Metrics – offer sophisticated ways to do this, methods that I embrace.

However, I’d like to propose a simple way to validate your monitoring. I call it “listening from the customer’s point of view.”  One way to do this is to augment the monitoring form with one that addresses the customer’s experience. After all, this is the primary outcome you want to achieve.

Another way is to monitor some calls holistically from the caller’s point of view. Using a simple, four-point Likert scale, you can ask yourself, “Overall, in my opinion, was this caller: delighted, satisfied, mollified, or disgruntled with the calling experience?”

Once a sufficient database of calls has been monitored in this way, determine if there is a correlation between the customer’s point of view and the typical rating criteria already in use. If the correlations are strong, it is an indicator that the usual criteria are valid. If the correlations are weak or nonexistent, this is an indicator that you need to revise your monitoring criteria.
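
One simple way to run that comparison, assuming each monitored call now carries both the holistic four-point rating and its total score from the existing form, is a rank correlation, which suits ordinal data; the figures below are invented for illustration.

from scipy.stats import spearmanr

# Holistic caller-view rating: 4 = delighted, 3 = satisfied, 2 = mollified, 1 = disgruntled.
holistic_rating = [4, 3, 2, 4, 1, 3, 4, 2, 3, 1]
# Total score from the existing monitoring form for the same ten calls.
form_score = [95, 82, 70, 91, 60, 79, 88, 74, 85, 55]

rho, p = spearmanr(holistic_rating, form_score)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A strong positive rho suggests the existing criteria track the caller's experience;
# a weak or negative rho suggests the monitoring criteria need revision.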

External Quality Monitoring: Let’s explore in more detail what my friends at Customer Relationship Metrics mean by EQM – External Quality Monitoring. It’s a great way to see what you do not see. The longer you have been monitoring calls in a particular way, the harder it will be to see the “blind spots” in your current practice. It is easy (and quite self-deceptive) to believe that you know what is important to callers, and to believe that you measure those things with your monitoring forms. Adding EQM to your quality monitoring is a good way to bring the caller’s experience back into focus.

Your goal through external quality monitoring is to capture the voice of the customer as it relates to the calling experience. There are four ways to do this:

1) Mail surveys

2) Outbound telephone surveys

3) Automated post-call IVR surveys

4) Post-call IVR surveys with expert correction and interpretation of your results

Each way has its advantages and disadvantages. All EQM methods have bias built into their responses; it’s an inherent part of their methodology. That’s why I recommend that you use them in tandem with a randomly sampled internal quality monitoring method.

Telephone surveys have the least built-in bias, but they are the most intrusive of the EQM methods. Because of that intrusiveness, response rates tend to be low, which introduces a bias of its own.

Mail surveys and post-call IVR surveys are both opt-in mechanisms. Consequently, those customers who participate tend to be either delighted or disgruntled with their experience. That’s not a bad thing. You should be hearing from those callers. In fact, you stand to learn a lot from your happiest and your angriest customers – just don’t mistake their feedback for a representative sample. These methods, used alone, provide an incomplete and inaccurate view of your quality.

Mail surveys are suspect due to the inevitable time lag between the phone contact and the receipt of the mailer. Just how accurately can a caller remember the quality of the interaction they had with a call center agent a week ago? With mail surveys, it becomes harder to isolate the calling experience from the overall experience of price, product, promotion, and delivery of your company’s offering.

The great advantage of post-call IVR surveys is their immediacy. Because it is immediate, caller feedback is quite to the point. The drawback to post-call IVR surveys is that if responses are captured and reported automatically, solely using technology, you will likely get confusing results – because callers don’t always follow directions well.

The combination of post-call IVR surveys with expert interpretation and correction of your data is the royal approach to EQM. Use them in conjunction with a well thought out internal quality monitoring practice (as I’ve described in previous articles) to produce valid and meaningful results.

Read part 4 and part 6 in this series.

Cliff Hurst is president of Career Impact, Inc, which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net.

[From Connection Magazine September 2008]

Measurement, Reliability, and Validity: Part Four in the Continuing Series, Getting Quality Right

By Cliff Hurst

In conducting a statistical analysis, it is important to understand the level of measurement being used, how reliable it is, and if it is valid. These issues will be addressed in this article.

Measurement can occur at four levels: nominal, ordinal, interval, and ratio. For our purposes, we’ll treat interval and ratio data the same.

Nominal Data assigns numbers to names since software can only crunch numbers, not names. Suppose a call center operates two shifts a day. For analytic purposes, you may want to differentiate between shifts as you monitor the scores. To enter data into a statistical software package, you will assign a number to “day” and a number to “evening.” This is called coding the data.

You may define the day shift as “1” and the night shift as “2.” This doesn’t mean that “2” is higher than “1” in a judgmental sense or arithmetic sense. In nominal data, numbers simply become placeholders for names.

Ordinal Data signifies good, bad, and variations thereof. Ordinal means that there is an order to the numeric rating. It is good practice to code better ratings with higher numbers and lower ratings with lower numbers.

Let’s say you want to monitor whether agents verify a caller’s identity before disclosing confidential information. If the caller provides his or her last name and account number and these match your records, then proper verification has been made. Call monitoring will yield either a “yes” or a “no” determination. To follow common practice, assign the number “1” to “yes” and “0” to “no.”

In another situation, you might want to evaluate professional courtesy using a Likert scale of 1 to 5, where 1 means not acceptable, 2 means below average, 3 means average, 4 means above average, and 5 means excellent. You will be making an extrinsic judgment because you are looking for “shades of gray.” Scores for this will also be measured at the ordinal level of measurement because “excellent” is better than “average,” and so forth. There is an order to the rankings.

This is where confusion between ordinal and interval levels of measurement can creep in. It appears that the intervals are built upon a five-point scale. However, this is not an interval level of measurement – it is ordinal because there are no standard increments of measurement within the scale that was used. The difference between “average” and “above average” can only be qualitatively determined.

Interval/Ratio Data: If you want to get more granular in your analysis, you can develop an interval or ratio scale as a subset of the category of professional courtesy. The following is a far-fetched example, not a recommendation; I am only illustrating the statistical principle.

Suppose you decide that the more the agent says “thank you,” the more professional courtesy is displayed. Evaluating the number of times the agent says “thank you” gives an interval/ratio level of measurement. You can code this part of the form with a “0” if the agent does not say thank you at all, “1” if the agent says it once, “2” if the agent says it twice, “3” if the agent says it three times, and “4” if the agent says it four times, ad nauseam.

The type of statistical analysis that your data requires is determined by whether it is nominal, ordinal, or interval/ratio.
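
Pulling the three levels together, here is a hypothetical sketch of how a few monitored calls might be coded for analysis; the field names and values are invented for the example.

# Each dictionary is one monitored call, coded as described above.
monitored_calls = [
    {"shift": 1, "verified_identity": 1, "courtesy": 4, "thank_you_count": 2},
    {"shift": 2, "verified_identity": 0, "courtesy": 2, "thank_you_count": 0},
    {"shift": 1, "verified_identity": 1, "courtesy": 5, "thank_you_count": 3},
]
# shift             -> nominal (1 = day, 2 = evening; the numbers are only labels)
# verified_identity -> ordinal ("yes" = 1, "no" = 0)
# courtesy          -> ordinal (1-5 Likert scale; the increments are not standardized)
# thank_you_count   -> interval/ratio (a true count)

day_shift_calls = [call for call in monitored_calls if call["shift"] == 1]
print(f"{len(day_shift_calls)} of {len(monitored_calls)} sampled calls came from the day shift")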

Calibration involves using a set of proven statistical and analytical tools to measure how reliable and how valid your quality monitoring process is. Although reliability and validity are often lumped into one category, they are actually two distinct components:

1. Reliability addresses consistency.  Does your quality monitoring form allow your QA team to measure things consistently? Would different evaluators likely assign the same score to the same call? Does the team score similar calls similarly over time, or do they tend to drift apart in their scoring practices? These are the questions you must answer when establishing the reliability of your scoring forms. There are four different kinds of reliability:

Inter-rater reliability measures how similarly different evaluators rate the same call when they score it. (A rough way to put a number on this is sketched below.)

Test-retest reliability tracks whether the same evaluators would rate a call consistently if they were to score it again.

Parallel forms reliability ascertains whether one version of a form is at least as reliable as, or more reliable than, another.

Internal consistency reliability makes sure you are not “double-dipping” among what you think are distinct categories on your form.
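
As one rough way to put a number on inter-rater reliability, Cohen’s kappa measures how much two evaluators’ ratings of the same calls agree beyond what chance alone would produce; the ratings below are invented, and scikit-learn is only one of several tools that compute the statistic.

from sklearn.metrics import cohen_kappa_score

# Courtesy ratings (1-5) that two evaluators independently gave the same ten recorded calls.
rater_a = [4, 3, 5, 2, 4, 4, 3, 5, 2, 3]
rater_b = [4, 3, 4, 2, 4, 5, 3, 5, 2, 3]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1.0 indicate strong agreement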

2. Validity assesses whether your measurements are appropriate, meaningful, and useful. Validity is more difficult to quantify than reliability. There are three types of validity: content, criterion, and construct.

Content validity determines whether the things you measure are really an accurate reflection of what you intend to measure. We tend to do this pretty well in terms of the greeting, the closing, and accuracy of data entry.  It is harder to measure “soft” areas such as courtesy, professionalism, and tone of voice.

Criterion validity determines how well the criteria we use in our monitoring forms correlate with other measures of customer satisfaction, such as post-call IVR surveys, written or phone surveys, measures of first call resolution, escalations, accuracy of data entry, customer retention, and even financial measures like goodwill, credits, average collection period, and returns. It’s important not to create and use monitoring forms in a vacuum, removed from these other performance measures.

Construct validity can be difficult to get right. One example that misses its mark is something that I see quite often. Many monitoring forms ask, “Did the CSR use the customer’s name three times during this call?” It seems like a good measure of customer-focus, but it really isn’t. Callers do not generally count the number of times their name is used during the call.

As an industry, we ought to get better at construct validity. For example, I propose that the best indicator of courteousness and professionalism is whether CSRs acknowledge the reason for the call or the emotional state of that caller before asking for verifying information. This truly makes the caller feel heard and valued. Yet that acknowledgement is seldom included on monitoring forms.

A thorough comparison of customer survey results, correlated with assorted monitoring criteria, can help us determine authoritatively what elements really contribute to customer focus, professionalism, and courtesy, rather than relying on conjecture. This sort of thorough analysis, within the overall context of quality assurance, will lead to the next improvement in call center management: “getting quality right.”

Read part 3 and part 5 in this series.

Cliff Hurst is president of Career Impact, Inc, which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net.

[From Connection Magazine Jul/Aug 2008]

Getting Quality Right – Part 3: Motivation and Judgment: More than Statistics

By Cliff Hurst

Statistics are important to call center quality, and it is critical to apply sound statistical precepts to our work. However, stats are only part of the picture; by themselves they won’t allow you to “get quality right.” Let’s look at the principles behind a meaningful approach to call center quality management.

Motivation at Work: The results from quality monitoring practices will be determined – and constrained – by the prevailing paradigms of the organization’s leaders regarding people’s motivation to work.

I advocate a view of human motivation which holds that people seek work that is challenging and helps them grow. Doing good work is inherently satisfying. It’s the job of leadership (the job of a quality program) to help agents bring the best of who they are to what they do. This perspective is the bedrock of quality monitoring and performance feedback practices. The roots of this perspective lie in the field of humanistic psychology.

A different school of psychological thought is behaviorism. Pure behaviorists say that the only reason people work is to gain the external rewards offered by their employer – pay, benefits, and so forth. Behaviorist theory leads to an approach to quality monitoring that focuses on consequences – incentives for performing well and punishments for not doing well. Although punishment and rewards make a strong seasoning, a little goes a long way, while too much ruins the whole meal. Behaviorism, in its various manifestations, should play only a supporting role in quality programs. Its role is at the edges, not at the core. Behaviorism, at most, is the buffing wheel on the statue of call center performance; it is neither the chisel nor the sculptor.

Three Types of Judgment: A question heard often from call center quality managers is, “What should our monitoring form look like?” Everyone is seeking the best monitoring form. There’s even a book with sixty sample forms in it. It makes an interesting read, but don’t lean on it too heavily, because there is no such thing as a best monitoring form.

There are, however, a small handful of precepts that will enable you to create a form that best serves your purposes at your call center, because these are based on what is important to you and to your callers. With these precepts, you can be assured that your form will be both valid and reliable.

When you monitor according to valid and reliable criteria, your agents, your supervisors, and your senior managers will have greater confidence that the monitoring scores you report really mean something valuable.

So, how do you develop a monitoring form that gives you valid and reliable data? There are several steps involved. The first step is to recognize that there are three types of judgment involved in evaluating a call: the systemic, the extrinsic, and the intrinsic modes of valuation. These three terms provide a helpful lens through which to see the different ways to make value judgments.

Systemic evaluation is the realm of “yes/no” judgment. A specific part of the monitored call is whether the agent did or did not do or say something that was required; there are no shades of gray in systemic evaluation. Systemic components of a call are the easiest to select when monitoring and the easiest to calibrate among different raters. They are most helpful in identifying baseline criteria that all calls must achieve. They are not useful, however, when you seek to “raise the bar” to higher levels of engagement with the caller. To do that, you need to evaluate extrinsically.

Extrinsic evaluation is the realm of varying degrees of fulfillment of a concept. This is the world of “good, better, best.” Here there are shades of gray; you might use various types of Likert scales in the evaluation process. Let’s say that one of the criteria that you decide to evaluate through monitoring is the degree of professional courtesy exhibited by your agents. You may decide to rate these degrees of courtesy in the form of a one-to-five scale, where one is unacceptable, two is below average, three is average, four is above average, and five is excellent. Extrinsic evaluations comprise the greater part of most call monitoring forms. They are more difficult to calibrate than systemic evaluations, as the scores are more subjective and prone to argument between analysts and agents.

Intrinsic evaluation is a different animal altogether. Intrinsic evaluation plays a major role in coaching but only a minor role in call monitoring. This will be addressed in a future article. Most of what we do when we monitor calls is to analyze them by breaking the call into its component parts. What an intrinsic perspective gives us is a reminder to look at the call as an entity. Ask yourself, “On the whole, from the caller’s point of view, how well was this call handled?”  Ask, “Was this customer delighted, satisfied, mollified, or disgruntled with the process and the outcome of the call?”  The answer you get from an intrinsic evaluation may be quite surprising when you compare it to the rest of your analysis. The parts do not always add up to the whole.

Read part 2 and part 4 in this series.

Cliff Hurst is president of Career Impact, Inc, which he started in 1988. Contact Cliff at 207-499-0141, 800-813-8105, or cliff@careerimpact.net.

[From Connection Magazine June 2008]