By Dan Somers
Automation and artificial intelligence (AI) can help reduce contact center costs, but its primary benefit is increasing customer satisfaction by speeding up responses and reducing customer effort. Contact center automation falls broadly into three categories:
- Speeding up or automating the helpdesk agent (staff who capture and triage queries)
- Speeding up or automating the case handler (staff who resolve queries)
- Increasing self-service automation (chatbots, searchable FAQs, and self-help tools)
Certain limitations of AI mean it cannot guarantee the accuracy customers expect, however. Some of these limitations are temporary, such as the comprehension capabilities of speech recognition, which will continue to improve. But other limitations relate to how machine-learning bots work.
All machine learning relies on studying real-life training data to predict or classify current data. The training data needs to be “labeled”—that is, it must have an outcome or class (tag) assigned to it, as judged by a person. For example, if a query comes in that says, “My server has crashed and is showing a blank screen,” then the chatbot will assign the best label it has in its training set, which might be “server crashed.”
However, in this example, a label of “faulty screen” might be assigned instead. The customer would be annoyed if the bot attempted to address a faulty screen issue instead of a server crash. This is an example of potential ambiguity. Furthermore, new issues will appear from new product launches, changes in quality, and evolution in the market. Lastly, the way people describe or view the same problem is more variable for certain issues than others.
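The label-assignment step described above can be sketched with a toy classifier. This is a minimal illustration only, assuming a hypothetical keyword-overlap scorer and made-up training examples; real systems use trained statistical models, not word counting.

```python
# Toy illustration of label assignment: score an incoming query against
# hypothetical labeled training examples by word overlap and pick the
# best-matching label. Data and labels are invented for this sketch.

def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

# Hypothetical labeled training data (labels judged by a person).
TRAINING = [
    ("my server has crashed and will not boot", "server crashed"),
    ("the screen on my monitor is blank and flickering", "faulty screen"),
]

def classify(query):
    """Return the training label whose example shares the most words."""
    query_words = tokenize(query)
    best_label, best_score = None, -1
    for example, label in TRAINING:
        score = len(query_words & tokenize(example))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# The query from the article overlaps both examples equally here, so the
# tie is broken by training order -- exactly the kind of ambiguity that
# can send the bot down the wrong path.
print(classify("My server has crashed and is showing a blank screen"))
```

Note that the crashed-server query matches both training examples equally well, which mirrors the "server crashed" versus "faulty screen" ambiguity described above.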
The only safe way of deploying bots within a contact center is to have a human-in-the-loop. This person will validate what the bots are doing, preferably with minimal impact to the customer.
So, who and where is the human-in-the-loop? It turns out that there are four general ways for humans to validate some or all of the process:
- A helpdesk agent can validate suggested responses before sending.
- The customer can validate that the response—or the question they asked—was comprehended.
- A third-party solution provider can check the performance of the bots and curate the process; this might be an internal or external data science team.
- The knowledge base manager can check the bots for satisfactory performance.
Considerations for Humans-in-the-Loop
There are pros and cons of different human-in-the-loop approaches. Some of these points are technical in nature but have substantial implications.
Agent: Some solutions on the market have AI recommend the next “best response” for the agent. The agents validate the response, not the categorization. For example, if two queries—“The strawberries I bought were tasteless” and “The strawberries I bought made me sick”—both lead to the same recommended response, “We’re sorry; please accept our voucher,” then the categorization models will degrade as they are not being updated with the accurate root cause.
Also, the insight generated by the models won’t allow executives to monitor product quality, design, and usability to then generate the self-service tools that can reduce contact center traffic. With this solution, other humans-in-the-loop will still be required elsewhere.
Customer Validation: Having customers provide the required validation scales well, but customers may not like having to correct their original query or the responses. If a query produces a new category, there must be a process to deal with it. Fundamentally, the system cannot be relied upon with these humans-in-the-loop alone.
Solution Provider: This is the status quo for most machine-learning deployments in real-world environments: a data science team, either internal or third-party, sets up, curates, and retrains the models on a regular basis to maintain their performance. The pro is that these are the only humans-in-the-loop required. The con is that these professionals are in short supply.
Knowledge Base Manager: This role holds the most hidden potential benefit as a human-in-the-loop. In a nontechnical environment, knowledge base managers provide business rules on how to handle queries, as well as the training materials, troubleshooting guides, and fault tree analyses used to resolve issues.
In terms of their day-to-day role, they will be aware of product launches and modifications, but they also can use the rich insight of the labels coming from the contact center (both triage and resolution) to make improvements to both the knowledge base and the process. This includes updating the FAQs so customers can better use self-service. Also, this insight can inform other functions, such as product quality, product design, and customer experience, to help guide improvements.
A new approach that requires only a few humans-in-the-loop is possible because of a new technology called optimized learning. This is a form of machine learning that builds models but invites training from a human in a way that minimizes human input while still providing maximum performance. It is ideal for spotting new signals and improving existing ones.
Optimized learning doesn’t need to run in-line and avoids the downsides of the other approaches: it requires only a fraction of the labeling otherwise needed, even in a changing environment. The implications of this are profound. A call center would need to retain only a few agents after implementing automation, and they would handle the training that the optimized learning invited them to do in an offline capacity. This would maintain the models for labeling queries to generate both automation and insight, speeding up responses and reducing issue volume.
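Optimized learning itself is proprietary, but the general mechanism of inviting human training only where it helps most resembles uncertainty sampling from active learning, sketched below under that assumption. The confidence threshold, label scores, and queries are all hypothetical.

```python
# Hypothetical sketch of uncertainty sampling: only queries where the
# model's top two label scores are close (i.e., it is unsure) are routed
# to a human for labeling; confident predictions are automated.

def confidence(scores):
    """Gap between the top two label scores; a small gap means uncertain."""
    ranked = sorted(scores.values(), reverse=True)
    return ranked[0] - ranked[1]

def select_for_human(batch, threshold=0.2):
    """Split a batch into human-labeling work and automated labels."""
    needs_human, auto_labeled = [], []
    for query, scores in batch:
        label = max(scores, key=scores.get)
        if confidence(scores) < threshold:
            needs_human.append(query)            # human labels this one
        else:
            auto_labeled.append((query, label))  # model handles it
    return needs_human, auto_labeled

# Invented model outputs: label -> probability per query.
batch = [
    ("server crashed, blank screen",
     {"server crashed": 0.51, "faulty screen": 0.49}),
    ("strawberries made me sick",
     {"food safety": 0.90, "taste complaint": 0.10}),
]
needs_human, auto_labeled = select_for_human(batch)
print(needs_human)    # the ambiguous query goes to the human
print(auto_labeled)   # the confident prediction is automated
```

In this sketch the retained agents see only the ambiguous minority of queries, which is why the labeling burden stays a fraction of what full manual triage would require.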
The rest of the automation would come from the rules originating from the knowledge base manager, as informed by the bots. This paves the way for improving chatbots and self-serve, searchable FAQs to free up contact center staff.
Automation of contact centers holds promise, although not without humans-in-the-loop to maintain its performance. There are many different flavors of human-in-the-loop AI automation. With new technology appearing, an optimized system is possible with a minimal number of humans who don’t need any data science skills. There is now no reason why the contact center of the future needs to look like those of the present. The same applies to the customer experience.
Dan Somers is the CEO of Warwick Analytics, which provides call center automation solutions to address voice of customer (VoC) data, chatbots, service desks, and complaint handling.