Weighing Productivity and Quality to Assess Agent Performance

By Bill Price and Villette Nolon

Contact centers today are all about empowering agents, investing in their skill development, and reducing escalating attrition. Yet most centers still overly rely upon “the old ways,” involving hard, technical metrics that emphasize speed and quantity to track and manage agent performance.

It has become all too easy to judge the agents with the highest Contacts Per Hour (CPH), lowest Minutes Per Incident (MPI), lowest Average Handle Time (AHT), or highest Sales Per Hour (SPH) as the “best” performers. Figure 1 illustrates the typical stack-ranking of agents based on productivity.
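The stack-ranking in Figure 1 boils down to sorting agents on a single speed number. A minimal sketch of that productivity-only view (the agent names, contact counts, and hours here are invented for illustration):

```python
# Hypothetical data: contacts handled and hours worked in one shift.
agents = {
    "agent_1": {"contacts": 96, "hours": 8},
    "agent_2": {"contacts": 72, "hours": 8},
    "agent_3": {"contacts": 104, "hours": 8},
}

def contacts_per_hour(stats):
    """Contacts Per Hour (CPH): raw speed, with no view of quality."""
    return stats["contacts"] / stats["hours"]

# Rank fastest-first, as a productivity-only scorecard would.
ranking = sorted(agents, key=lambda name: contacts_per_hour(agents[name]),
                 reverse=True)
```

Ranked this way, the fastest agent tops the list no matter how those contacts actually went for the caller, which is exactly the blind spot described above.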

Unfortunately, productivity is not all that matters to callers. Speed metrics need to be counterbalanced with quality measurements, which are much harder to collect and even harder to associate with specific contacts.

One of these is real-time customer satisfaction. Today, you can measure quality and assign it to each agent and his or her contacts in three ways:

1. Most contact center Quality Assurance (QA) teams currently use standard score sheets, calibrated by supervisors, to assess agent call or email quality, scoring five to twelve contacts per agent each month. This is a start, but so small a sample can be hit or miss, and supervisor scores don’t necessarily reflect caller perceptions.

2. Very few contact centers measure post-contact caller satisfaction via an email survey launched within minutes of a phone or email interaction. This method, our favorite, asks a maximum of four questions about the caller’s reactions. The sample is much broader than in the previous method, and it is very timely.

3. Very few contact centers track actual caller actions after the interaction to find out if they actually placed the order, remained a customer, or purchased more – the ideal scenario.

The next step in evaluating quality is to answer these questions: How good a job did the agent do? Did he or she solve the problem or close the sale? Not until we collect and balance the “soft,” quality side can we get a true picture of agent performance.

Viewed in this new way, some scary facts of support center life are often unveiled, as Figure 2 illustrates. In this matrix, we see that some of the “fastest” agents do a poor job, causing repeat contacts or even losing customers (see coordinates for agents 1 and 3). In contrast, some of the “slowest” performers, who might lose their jobs because they can’t keep up with CPH or SPH standards, are doing a great job with callers and delivering outstanding experiences (see agent 11).

The new balanced scorecard produces significantly better results on both axes – productivity (speed element) and quality (soft side) – helping agents better understand where to improve. Once call centers collect and report on such balanced performance data, like those shown in Figure 3, they can identify “best agents” (blue), those “at risk” (red), those “miscast” (brown), and those “on the bubble” (green).
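In code, the four-group breakdown of Figure 3 amounts to a simple two-axis classification. A rough sketch, assuming normalized 0-to-1 scores and invented thresholds (the real cut-offs would come from your own standards, not from the article):

```python
def classify_agent(productivity, quality, prod_target=0.7, qual_target=0.7):
    """Place an agent in one of the four balanced-scorecard groups.

    Both inputs are assumed to be normalized 0-1 scores; the 0.7
    thresholds are placeholders, not values from the article.
    """
    fast = productivity >= prod_target
    good = quality >= qual_target
    if fast and good:
        return "best"           # blue: recognize, reward, learn from them
    if fast and not good:
        return "at risk"        # red: speed at the expense of callers
    if not fast and good:
        return "miscast"        # brown: great with callers, too slow
    return "on the bubble"      # green: low on both axes
```

The payoff of plotting both axes is that agents who look identical on a CPH report, say 0.9 productivity with 0.9 quality versus 0.9 productivity with 0.4 quality, land in opposite action-plan groups.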

So, what action plans can you implement next to drive all agents toward the ideal, blue upper left-hand corner of the agent performance management matrix?

Best agents (agent 2): Recognize and reward them, but also find out how they can be so fast and successful. They probably don’t follow the norms set in training. Use their expertise to improve your training program.

At-risk agents (agents 1 and 3): Slow them down and help them build quality responses (i.e., add capacity while they handle fewer calls or produce less work), targeting them first to match agent 5 and then to arc up to meet agent 4.

Miscast agents (agent 11): Find something else for them to do, since you won’t be able to get them to work faster. These people typically make great trainers, QA staff, or mentors who can help junior agents.

On-the-bubble agents (agent 12 and even 10): Ask what happened and find the right path for them, since they might be a drain on the team’s spirit. Perhaps they were assigned to the wrong team leader. They may also signal a need to improve your hiring criteria.

In the end, managing agent performance is all about finding the right balance between technical and quality metrics to help your call center achieve greater levels of productivity and quality. Old technical metrics may be easy to measure, but ultimately it is the caller who holds the key to your success. Measuring caller satisfaction in real time is what will give you a true, accurate picture of your support team’s performance.

Bill Price is founder, president, and CEO of Driva Solutions LLC, a strategic consulting and operational implementation services firm serving the global customer contact industry. He can be reached at info@drivasolutions.com. Villette Nolon is president and CEO of NetReflector Inc., a Seattle-based provider of customer satisfaction measurement solutions that uses online survey technology. She can be reached at villetten@netreflector.com.

[From Connection Magazine January 2008]
