It is a measure of how much we take sophisticated technology for granted that the appearance of a pop-up chatbot screen, asking questions and providing sensible responses, is no longer considered remarkable.
Chatbots today inhabit websites, intranets, apps, and social media platforms, and have become so ubiquitous as to be almost invisible. Interacting with a text screen is a natural activity, and most users don’t seem to care much whether the other side of the conversation is a human or a bundle of code.
From a corporate perspective, chatbots can be a win/win. Increasingly reliable in their responses and cheap to operate, they are available night and day and are instantly scalable. Whether your site or app has one visitor a day or thousands, the bot is always eager to help.
We’ve come a long way from the disastrous early attempts at providing AI assistance – remember Microsoft’s paperclip? – but what a good chatbot does today is much the same as that much-loathed animated character: identify what a user is trying to do, and offer appropriate help.
You will have heard of the Turing Test, which holds that if a computer can provide responses indistinguishable from those of a human, the machine must be considered intelligent. Are we there yet?
Despite some well-publicised claims, the answer is still, probably, no. In 2014 a program called Eugene Goostman tricked Turing Test judges into believing it was a 13-year-old Ukrainian boy. This controversial victory still counts as a failure, though, because nobody expects, or wants, to find an adolescent behind a real-world help screen.
More relevant is the Loebner Prize for the most convincing chatbot. It awards a bronze medal each year to the best contender, but has never made a silver (text) or gold (audio-visual) award – the equivalent of a Turing Test pass.
A glance at the best entries from the 2018 competition shows why. Eleven bots were each asked 20 questions, earning two points for a human-like answer and one for a merely plausible response. Out of a maximum of 40 points, the winner scored 27 and the weakest just 12.
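The marking scheme described above is simple enough to sketch. This is an illustration based only on the article's description of the rubric, not the official Loebner Prize marking code; the mark values and helper names are assumptions.

```python
# Hypothetical sketch of the Loebner-style marking described above:
# 20 questions per bot, 2 points for a human-like answer,
# 1 point for a merely plausible one, 0 otherwise.
HUMAN_LIKE, PLAUSIBLE, POOR = 2, 1, 0


def total_score(marks):
    """Sum a bot's marks across its 20 questions."""
    if len(marks) != 20:
        raise ValueError("each bot answers exactly 20 questions")
    return sum(marks)


# A perfect run would score 40; the 2018 winner reached only 27.
perfect = total_score([HUMAN_LIKE] * 20)
```

Even the best 2018 entry, on this scale, averaged barely better than "plausible" per question.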
Even simple questions can make the tech fall over. The winner, a chatbot called Tutor by Ron C Lee, answered “Do you know how to make toast?” with “No, we haven’t”.
While there remain limits on what a chatbot can convincingly do, this need not be a problem if it is deployed in the right way. Recent research from Penn State University found that while many users appreciate an apparently empathetic response from a bot, those who believe machines are actually capable of consciousness do not.
“The majority of people do not believe in machine emotion, so took expressions of empathy and sympathy as courtesies,” said researcher Bingjie Liu. “However, people who think it’s possible that machines could have emotions had negative reactions from the chatbots.”
The answer is to use them only for things they are good at, says James Williams, who leads the development of advanced chatbots at Nottingham-based software company MHR. While chatbots are now common in consumer interfaces, he notes, there is much potential in the enterprise space.
Applied within the company’s flagship human resources (HR) software, the conversational interface is an excellent way to simplify common transactions, Williams says. “You’ll hear us talk a lot about reducing friction,” he says, meaning anything that slows down a routine interaction.
An example is an employee submitting an expenses claim, which MHR’s Talksuite does through an AI-driven chatbot. “Taking a picture of a receipt is a natural thing to do, and the AI will recognise the image, understanding the content as well as the context. Bots are really good for processes with lots of rules or lots of steps, and here it just asks a few questions and saves the employee a lot of hassle. Less friction.”
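A process with “lots of rules or lots of steps” can be sketched as a fixed sequence of prompts, with one answer collected per step. This is a minimal illustration only, not MHR’s Talksuite; the step names and questions are invented for the example.

```python
# Minimal sketch of a step-driven expenses bot, loosely modelled on the
# rules-and-steps pattern described above. Fields and wording are assumed.
STEPS = [
    ("amount",   "How much was the receipt for?"),
    ("category", "What was it for (travel, meals, other)?"),
    ("date",     "When did you incur the expense?"),
]


def run_claim(replies):
    """Walk the scripted steps in order, pairing each question with a reply."""
    if len(replies) != len(STEPS):
        raise ValueError("one reply is needed per step")
    claim = {}
    for (field, _question), reply in zip(STEPS, replies):
        claim[field] = reply  # a real bot would validate each reply here
    return claim


claim = run_claim(["12.50", "meals", "2019-03-04"])
```

Because the flow is just ordered rules, the bot always knows which question comes next – the quality Williams identifies as making such processes a good fit for conversational interfaces.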
Knowing when not to deploy a bot can be just as valuable. Williams recalls one client that had deployed a complex chatbot for newly joining employees – known in HR circles as the onboarding process. “The chatbot went through everything plus the kitchen sink, so the employee was there for 20 minutes or more being interrogated by a machine. It was just awful. A web-based form is a much better interface in this situation.”
His final advice is to consider the image the bot projects. “Any personality in a chatbot tends to come accidentally, unlike a website or an app. If you let software developers write the conversation, you might end up with a bot that’s actually a bit of a dick. People make judgements on things like language and punctuation. It’s fine to be personable and friendly, but it should be clear when the user is talking to a bot and when any transition to a human interaction takes place.”