Chatbots Should Learn from the Errors of IVR


Chatbots could follow the path of IVR, a once-promising technology that earned customer ire through poor implementation

By Peter Lyle DeHaan, PhD


I don’t often use web chat because I find a phone call is faster and more thorough. Recently I made an exception and learned a valuable lesson.

The email said that my new statement was available online. I might be one of the few people who still download and review online statements, but that’s what I do. So I logged in and navigated to the right page. I clicked on the link for my most recent statement, but it brought up last month’s. With more navigation, I found a list of all my statements. Alas, my current statement wasn’t there.

About this time a chat invitation popped up. “I see you’ve been notified that your new statement is available. Can I help you?”

Without giving it enough thought, I typed in, “I can’t download my statement.”

Immediately I received a reply. “Here are two resources that might help you out.”

From the titles of these links, I knew they were pointing me in the wrong direction, telling me what I already knew. I tried again. “No, my current statement isn’t available.”

Again, the chatbot responded immediately. “Here are three links that might help you resolve the problem.”

Once again, the links wouldn’t help. What started as an amusing experience with technology was becoming exasperating. Then I typed, “Can I talk with a person?”

The bot responded immediately, “I can help you.”

Obviously the bot wasn’t interested in connecting me with a real person. I typed in what I thought: “You’re worthless.” (Though I’ve never said that to a person, I often say that to technology.)

But before I could close the chat window, I got another message. “Let me connect you with a representative.”

With the potential for help only seconds away, I stuck around. Half a minute later, Lisa popped up in the chat window.

Unfortunately, my failed chatbot experience had agitated me, much like a futile interaction with IVR does. At this point, emotion, rather than logic, dictated my first question: “Are you a person or a bot?”

Lisa assured me she was a real person, and we set to work on downloading my statement. She had me try a different method to reach it, but that didn’t work either. I pasted the error message into the chat window for her to see. Then she had me try a different browser, with the same results.

As we continued, I noticed a subtle change on the statement page. First, the proper link appeared, but it still didn’t work. A little while later, the link worked. Then I recalled a problem I had with my bank a few years ago. They would send out the email announcing that my statement was available, even though the department responsible for posting it online hadn’t finished their work. The two groups weren’t communicating.

I realized that the same thing had happened with this company. Expecting the statement to be online by a certain time, the email group sent out a notice, not knowing the statement wasn’t available.

This, of course, brings up another all-too-common scenario: a company causes customer service activity through its own actions. But that’s a topic we’ve already covered.

The point today is that chatbots are an exciting technology that can help call centers serve customers better and help agents do their jobs more effectively. Yet improper application of chatbot technology threatens its utility by alienating the very customers it’s supposed to help.

This is exactly what happened with the introduction of IVR, and that technology never recovered. May chatbots have a different outcome. Both the call center and its customers need this one to be a win.

Peter Lyle DeHaan, PhD, is the publisher and editor-in-chief of Connections Magazine. He’s a passionate wordsmith whose goal is to change the world one word at a time. Read more of his articles at PeterDeHaanPublishing.com.
