Remember the Oscar-winning movie Lost in Translation by Sofia Coppola? Bill Murray played aging movie star Bob Harris, who travels to Tokyo to shoot a whisky commercial. Although the director gives him lengthy, detailed instructions in Japanese, the translator condenses everything into a single sentence – the rest of the communication is lost. What does this movie have in common with the rise of chatbots? Quite a lot, as it turns out.
2017 is the year of the bots, and many companies are jumping on the bandwagon, building showcases to prove they haven’t missed the “new trend”. From Amazon’s Lex, the technology that powers the virtual assistant Alexa, to the integrated news, shopping and weather bots in Facebook Messenger, conversing with a robot seems to be the next big thing – and perhaps soon our most common form of communication, especially in customer service.
This is a good thing in itself, as it teaches commercial and technology people alike how to build, use and train this technology, while reducing acceptance barriers among decision makers. These immediate benefits, however, may come at a price in the future.
On the one hand, introducing bots shifts more customer interaction towards chat and messenger interfaces. Research indicates that consumers who prefer these channels generate up to 20% more transaction volume. Some might argue that this isn’t a critical development, because the increase in volume is more than offset by the cost savings of automation.
On the other hand, it’s fair to assume that automation will never reach 100%, and every transaction that can’t be handled fully automatically escalates into a manual one. At this point, the challenges of an isolated bot approach become obvious. Consumers will rightly expect the escalation of an initially automated dialogue to build on the information they have already provided. Lengthy questions and repetition cause frustration and eventually reduce acceptance of automated resolution.
One of last year’s chatbot failures played out on the Facebook page of clothing retailer Asos, where immature chatbots were apparently handling customer inquiries – and interacting with customers in a confused way. What started as a customer inquiry about items lost by an Asos courier turned into a Facebook discussion about the company’s use of chatbots, with customers being asked for reference numbers for enquiries they had never made. Asos later denied via Twitter that the answers were chatbot-generated – which, if true, would instead point to poorly trained customer service employees.
Another example of an unmonitored chatbot is Microsoft’s artificial intelligence chatbot Tay, which was designed to talk like a millennial and learn from humans. But Tay didn’t get the right teachers. The chatbot soon spread racist comments, quoted Hitler and endorsed Donald Trump’s immigration policy. In other words: the best chatbot is only as good as the humans behind it. A chatbot can quickly learn to respond appropriately to users’ questions, but it only improves under human supervision.
Integrated customer service platform
The volume of escalated transactions needs to be monitored and analysed. Obviously, you want to evaluate the quality of existing bot behaviour, but you also want to identify typical patterns that allow you to create new ‘bot rules’ and drive down the escalation rate.
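As a rough illustration of this kind of analysis, the sketch below counts which inquiry types escalate most often – the natural candidates for new bot rules – and computes an escalation rate. The data shape and all names (`top_escalated_intents`, the `intent` field) are illustrative assumptions, not part of any particular platform.

```python
# Hypothetical sketch: spotting candidates for new bot rules by counting
# which intents most often escalate to a human agent.
from collections import Counter

# Assumed log format: one record per escalated transaction
escalations = [
    {"intent": "lost_parcel"}, {"intent": "refund"},
    {"intent": "lost_parcel"}, {"intent": "lost_parcel"},
]

def top_escalated_intents(log, n=3):
    """Return the n intents most frequently escalated to human agents."""
    return Counter(e["intent"] for e in log).most_common(n)

def escalation_rate(escalated: int, total: int) -> float:
    """Share of all transactions that needed a human."""
    return escalated / total if total else 0.0

print(top_escalated_intents(escalations))
# → [('lost_parcel', 3), ('refund', 1)]
print(escalation_rate(len(escalations), 20))  # → 0.2
```

A frequent escalated intent like `lost_parcel` here is exactly the kind of pattern that justifies writing a new automated rule.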
So how do you achieve this? Bots are great, but even greater once they’re part of an integrated customer service platform. All human agents need to be able to follow and understand the full transaction history so they can create a seamless customer interaction experience. That isn’t the only reason to track and monitor all transactions, however: you also want the traditional customer service agent to be part of the bot training process.
If your bot can’t reach a satisfactory confidence level for its suggested response, a human agent validates whether that response is acceptable, chooses an automated alternative, or crafts a resolution of their own. This way, you safeguard the quality of response and resolution while also training the bot for future transactions of a similar type.
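This human-in-the-loop routing can be sketched in a few lines. Everything here is an illustrative assumption – the threshold value, the `handle` function and the review callback are not part of any real bot framework – but the logic mirrors the flow described above: confident answers go out automatically, the rest are escalated, and the agent’s decision is logged for retraining.

```python
# Minimal, hypothetical sketch of confidence-based escalation with a
# human in the loop. All names are illustrative, not a real framework.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; tune for your own bot

training_log = []  # escalated cases, later fed back into bot training

def handle(suggestion: str, confidence: float, agent_review) -> str:
    """Send the bot's suggestion directly if confident enough;
    otherwise let a human agent validate, replace, or rewrite it."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return suggestion  # fully automated path
    final = agent_review(suggestion)      # human decides the resolution
    training_log.append((suggestion, final))  # record for future training
    return final

# A confident answer goes straight out...
reply = handle("Your parcel ships Monday.", 0.95, lambda s: s)
assert reply == "Your parcel ships Monday."

# ...a weak one is escalated and rewritten by the agent
reply = handle("Please restate your enquiry.", 0.40,
               lambda s: "Let me check that order for you.")
assert reply == "Let me check that order for you."
assert len(training_log) == 1
```

The key design point is the log of (bot suggestion, human resolution) pairs: it is precisely the supervision signal that lets the bot improve over time.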
Evidently, this calls for a system that delivers the best possible user experience on the consumer side and best-in-class efficiency on the operations side. Since many traditional customer engagement tools have their origins in the social media ecosystem, where efficiency was only one of many objectives, we seem to need a new kind of solution that combines a strong user experience with efficient, highly productive agent interfaces.
By integrating bots into your customer service strategy and platform, you can avoid getting lost in translation like Bill Murray in Tokyo – and instead create your own ghostbuster strategy to control and manage bots before they turn into “transaction monsters”.
This article was first published on banknxt.com.