Back in April 2016, I wrote about how Edward came to life: a “virtual host” at the Radisson Blu Edwardian that is serving an ever-growing number of guests at this prestigious hotel chain and has driven a 13-point increase in NPS. A lot has happened since, and we learned a lot along the way, too. I’d like to take some time and talk about another bot that is planned to see the light of day soon: a customer service bot on Facebook Messenger for a car maker, answering questions for prospective customers and existing drivers alike. Since the solution is not live yet, we will call them “Monument Car” in this piece.
We Made a Key Design Decision: A Bot Without a Name
We decided not to give this bot a name or design a persona for now. Different design considerations go into creating a personality for a chatbot, so we decided to focus on function for this new system. To accomplish this, we had to eliminate all mentions of the first person singular and use the first person plural instead. In some cases that was easy to do, especially when the bot was directly answering a particular question about Monument’s products. However, other cases required some more thinking. With the first person singular, you can easily express that a message was not understood: “Sorry, I missed that, could you rephrase?”. Converting that to “we” would sound odd. We came up with neutral sentences, variations of elliptic phrases such as “Not sure what you mean. Can you please restate that?” Why variations? A common technique to avoid coming across as too robotic is random prompting: designing multiple variations of essentially the same message, picked at random during the conversation.
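Random prompting can be sketched in a few lines. The wording below is illustrative (the production variations were written by the conversation designers); the small twist is to avoid picking the same variation twice in a row, so consecutive misses never read identically:

```python
import random

# Illustrative fallback variations -- not the production copy.
FALLBACK_PROMPTS = [
    "Not sure what you mean. Can you please restate that?",
    "Sorry, that didn't come through. Could you rephrase?",
    "Hmm, we didn't catch that. Can you put it another way?",
]

_last_prompt = None

def fallback_message():
    """Pick a random variation, skipping the one used immediately before."""
    global _last_prompt
    _last_prompt = random.choice([p for p in FALLBACK_PROMPTS if p != _last_prompt])
    return _last_prompt
```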
Here is an example flow:
The follow-up question “Does this answer your question?” also uses the random prompting technique. The bot is smart enough not to insist on an answer, though. If the customer moves on to the next question, the bot will answer that just fine.
As you can see at the end, pleasantries were planned for (and even answers to questions such as “how old are you?” or “who built you?”). Our experience with Edward showed us that quite a number of people use typical conversation markers such as “hello”, “thanks”, or “goodbye”. In the case of Edward, these made up 14% of all incoming messages! The bot can therefore handle a variety of such messages, so it doesn’t fall over when the customer sends “cool!” after reading about an impressive feature of a Monument car.
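Handling these conversation markers amounts to checking the message against a small table before handing it to the FAQ pipeline. The triggers and replies below are hypothetical examples, not Monument’s actual copy:

```python
# Hypothetical mapping of common conversation markers to acknowledgements,
# so messages like "cool!" or "thanks" don't trigger the fallback prompt.
PLEASANTRIES = {
    ("hi", "hello", "hey"): "Hello! What can we help you with today?",
    ("thanks", "thank you", "cool", "great"): "Glad to help!",
    ("bye", "goodbye", "see you"): "Goodbye, and safe travels!",
}

def match_pleasantry(message):
    """Return a canned reply for a pleasantry, or None to defer to the FAQ logic."""
    text = message.lower().strip("!?. ")
    for triggers, reply in PLEASANTRIES.items():
        if text in triggers:
            return reply
    return None
```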
We Converted FAQs from a Website into the Chat Medium
One of the key insights we conveyed to our customer as we kicked off this project was that taking existing content and applying it to a new medium as-is can lead to sub-optimal results. For example, websites are by nature media-rich environments that allow for a high-fidelity display of information. There is virtually no limit to the amount of information you can convey, nor to its format. Chat is quite the opposite: you are operating in a medium that is based on the idea of a conversation, and that constrains how much information can be conveyed at a time. Copying and pasting content from the website into a message bubble is not recommended.
A key phase of the project therefore was the message and conversation interaction design. Neutral formulations such as “Customers can register here:” had to be converted to the second person singular: “You can register here:”; longer messages – and they couldn’t be avoided given the subject matter – had to be split up into messages of under 320 characters each, to fit the constraints of Facebook Messenger.
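The splitting step can be sketched mechanically. A minimal version, assuming no single word exceeds the limit, breaks on word boundaries so no bubble ends mid-word (in practice the designers rewrote the copy rather than cutting it blindly):

```python
def split_for_messenger(text, limit=320):
    """Split long copy into chat bubbles of at most `limit` characters,
    breaking on word boundaries so no bubble ends mid-word."""
    bubbles, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            bubbles.append(current)
            current = word
    if current:
        bubbles.append(current)
    return bubbles
```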
This is a process that will continue over time. By observing the interaction patterns of real users rather than beta testers, we can better tailor the answers to the needs of the end customers interacting with the bot. It is important to note that the bot still frequently leads the customer over to a website with more information on a topic. Over time we want to incorporate the business logic and content available on the website today within the bot itself, so that the resulting experience is less fragmented.
We Helped It Understand Normal, Natural Language
A key component of the bot is that it understands context and natural language. Many of the questions that were formulated in the original FAQs as full questions would likely never be phrased as such by real people. An FAQ such as “What are the technical requirements for a vehicle to use Monument Online?” might be phrased by a real person like this: “What does my car need so it can use Monument Online?” – or in 500 other ways. Language knows no boundaries.
Our solution uses Aspect NLU to help with the disambiguation of similar questions and the successful extraction of meaning and intent. It took about four person-weeks to code the rules for answering ~100 questions with a level of accuracy that satisfied our customer. The effort invested during this phase of the project will pay off quickly, as we can deflect calls from the contact center and help some customers make buying decisions. Furthermore, once the rules are created, they can immediately understand the 13 other languages Aspect NLU supports, thanks to its interlingua approach – and Monument has big plans in terms of multi-language support for this new customer service offering.
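Aspect NLU’s rule language is proprietary, so the following is only a loose illustration of the general idea: many surface paraphrases map onto one intent. Here, a question matches an intent when any one of its (hypothetical) keyword sets is fully present in the message:

```python
# Simplified illustration only -- not Aspect NLU's actual rule format.
# Each intent lists alternative keyword sets; a question matches an intent
# if all keywords of any one set appear in it.
INTENT_RULES = {
    "tech_requirements": [{"requirements", "online"}, {"need", "online"}],
    "concierge_price":   [{"price", "concierge"}, {"cost", "concierge"}],
}

def match_intent(question):
    """Return the first intent whose keyword set is contained in the question."""
    tokens = set(question.lower().replace("?", "").split())
    for intent, keyword_sets in INTENT_RULES.items():
        if any(keywords <= tokens for keywords in keyword_sets):
            return intent
    return None
```

A real NLU engine goes far beyond this – morphology, synonyms, word order, and the interlingua mapping that makes the rules portable across languages – but the paraphrase problem it solves is the same one shown here.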
Since chat is inherently conversational, we had to accommodate incomplete sentences and elliptic follow-up questions, as these are common in our everyday use of language.
Consider the following four FAQs the bot covers:
- What is Concierge Service?
- How can I contact the Concierge Service?
- What is the price of the Concierge Service?
- Can my partner or a co-user use the Concierge Service?
During a conversation, people start to use pronouns to refer to previously introduced concepts. So once the context of “concierge service” is established, a follow-up question might be “how much is it?”. It is thus crucial to maintain context to be able to answer questions such as “how much is it?” or “can my wife use it?” – What is it? Or, as a linguist would ask: Which antecedent does the pronoun refer to?
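A minimal sketch of this kind of context tracking (the topic list and pronoun handling are assumptions for illustration, not how our production system is built) remembers the last topic mentioned and substitutes it for pronouns in follow-ups:

```python
class ConversationContext:
    """Remember the most recently mentioned topic so that pronouns in
    elliptic follow-up questions can be resolved to their antecedent."""

    TOPICS = ["concierge service", "monument online"]  # illustrative topic list
    PRONOUNS = {"it", "this", "that"}

    def __init__(self):
        self.last_topic = None

    def resolve(self, question):
        text = question.lower().rstrip("?")
        for topic in self.TOPICS:
            if topic in text:
                self.last_topic = topic  # explicit mention updates the context
                return question
        if self.last_topic:
            # Substitute the remembered topic for any pronoun, word by word.
            words = [self.last_topic if w in self.PRONOUNS else w
                     for w in text.split()]
            suffix = "?" if question.endswith("?") else ""
            return " ".join(words) + suffix
        return question
```

Once “What is the Concierge Service?” has been asked, “how much is it?” resolves to a question about the concierge service and can be routed to the right answer.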
Sometimes even the exact same question can yield different answers, depending on the flow of the conversation. Context awareness is critical for a chatbot. Without it, users will read “I’m sorry, not sure what you just asked” a little too often… a mistake made frequently with the early bot implementations we saw in 2016.
We Planned for Human Backup
A chatbot is almost like a new hire in the contact center, or on the store’s show floor. On their first day in front of real customers, more often than not they will need help from a colleague to answer questions. It’s not much different with a chatbot: no matter how good the design and the testing, it was crucial to have human backup in place. In the case of Monument, their social media team was already engaging with customers around the globe from their support center in Europe. To integrate the bot with the care team, we had to implement a technical integration with their existing social media platform. In close cooperation with their vendor, we coupled our APIs and built new workflows that allow for a smooth handover. When the bot cannot make sense of the customer’s message, it now says “Missed that again, my apologies! What we can do at this point is pull a live agent in so they can help you here. You would need to wait a few minutes. Would you like to talk to a live agent now?” – gracefully recovering the situation and still providing great service to the customer.
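The escalation logic behind that behavior can be sketched as a simple policy: count consecutive misses and, past a threshold (two here, an assumed value), offer the handover instead of looping on the fallback prompt:

```python
# Sketch of an escalation policy; the two-miss threshold is an assumption.
HANDOVER_OFFER = (
    "Missed that again, my apologies! What we can do at this point is pull a "
    "live agent in so they can help you here. You would need to wait a few "
    "minutes. Would you like to talk to a live agent now?"
)

class EscalationPolicy:
    def __init__(self, max_misses=2):
        self.max_misses = max_misses
        self.misses = 0

    def on_message(self, understood):
        """Return None when the bot answers normally, a reprompt on the first
        miss, and the handover offer once the miss threshold is reached."""
        if understood:
            self.misses = 0
            return None
        self.misses += 1
        if self.misses >= self.max_misses:
            self.misses = 0
            return HANDOVER_OFFER
        return "Not sure what you mean. Can you please restate that?"
```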
The proof-of-concept has been a success and has been officially accepted by the customer. Once we are live with this implementation and are gaining more insight into usage and performance, we will be able to share more. Stay tuned!