The Future of ITR, Part III: Dealing with Natural Language Ambiguity


Human natural language is ambiguous; this forms the basis of many cultures’ sense of humor, as well as laying the foundation for interactions spanning the gamut from metaphor to double entendre. Resolving ambiguity in order to understand the intent of the speaker is a difficult task – difficult enough that human users of language sometimes fail at it themselves, a failure without which we would be missing many sitcom episode storylines. When dialogue systems deal with the ambiguous utterances of human users in an automated self-help environment, a model of context becomes an essential component of the understanding process.

Lexical ambiguity arises when a given word has more than one meaning. There are hundreds of ambiguous words in English; for example, the word bank could be a place to store money, the side of a river, or what you do with a complex pool shot. In many cases, the domain of a conversation provides the necessary disambiguation:

Ambiguity 1

It is highly unlikely that a user ordering pizza wants to see it decorated with an abundance of silly 1980s pop references. In well-defined task domains, disambiguation can proceed from the knowledge that the domain of the task provides a semantic neighborhood constraining the interpretation of an item. In open-ended tasks, the utterance itself provides a neighborhood; in the absence of any other context, the presence of the word pizza in that sentence can constrain the interpretation of cheese, because the non-food interpretation can be seen as too semantically distant from pizza to be plausible.
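This idea of a constraining semantic neighborhood can be sketched computationally. The following toy example assumes a hand-built sense inventory and a simple overlap measure – illustrative assumptions standing in for a real lexicon or embedding-based distance:

```python
# A minimal sketch of sense disambiguation by semantic neighborhood.
# The sense inventory and word lists below are illustrative assumptions,
# not a production lexicon.

SENSES = {
    "cheese": {
        "food":  {"pizza", "topping", "mozzarella", "extra"},
        "photo": {"camera", "smile", "picture"},
    },
    "bank": {
        "finance": {"money", "deposit", "account", "loan"},
        "river":   {"shore", "water", "fishing"},
    },
}

def disambiguate(word, utterance_words):
    """Pick the sense whose neighborhood overlaps most with the
    other words in the utterance."""
    context = set(utterance_words) - {word}
    best_sense, best_overlap = None, -1
    for sense, neighborhood in SENSES[word].items():
        overlap = len(neighborhood & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("cheese", ["i", "want", "extra", "cheese", "on", "my", "pizza"]))
# → food
```

Here the presence of pizza (and extra) pulls cheese toward its food sense; in a pizza-ordering domain, the domain lexicon itself would supply that neighborhood even without the word pizza in the utterance.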

A slightly harder task is understanding the natural use of pronouns. Users of English employ the pronouns he, she, it, and they in situations where the entity or entities to which the pronoun refers should be clear. Determining what a pronoun refers to is called pronoun resolution. A simple example might be in the domain of reviewing a hotel reservation:

Ambiguity 2

In this case, the reservation is the entity in focus; it is a trivial matter to resolve the pronoun it. Imagine, however, this interaction with a personal digital assistant:

Ambiguity 3

There are two entities to which the speaker could be referring as it: the meeting and the account. However, accounts cannot be rescheduled; meetings can, and this additional semantic information allows us to decode the sentence. If the reply had been, “Who is the AE on it?” the best interpretation would be that it means the ABCD account, because accounts have account executives and meetings do not. This can be represented computationally in a semantic network that defines the relationships between concepts and their attributes. Even harder would be this interchange:
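A toy version of such a semantic network can make this concrete. The concept-to-action tables below are illustrative assumptions; the idea is simply that a verb’s selectional constraints filter which discourse entity a pronoun can refer to:

```python
# A toy semantic network for pronoun resolution. The actions each
# concept supports are illustrative assumptions.

AFFORDANCES = {
    "meeting": {"reschedule", "cancel", "attend"},
    "account": {"has_account_executive", "review"},
}

def resolve_it(candidates, constraint):
    """Return the discourse entities compatible with the verb's
    (or question's) semantic constraint on 'it'."""
    return [c for c in candidates if constraint in AFFORDANCES[c]]

# "Can you reschedule it?" -- only the meeting can be rescheduled.
print(resolve_it(["meeting", "account"], "reschedule"))
# → ['meeting']

# "Who is the AE on it?" -- only accounts have account executives.
print(resolve_it(["meeting", "account"], "has_account_executive"))
# → ['account']
```

When the filter leaves exactly one candidate, the pronoun is resolved; when it leaves more than one, the system needs further context or a clarifying question.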

Ambiguity 4

Resolving the pronoun him requires the advanced knowledge that the scheduler of the appointment would be the most appropriate person to inform about a late arrival.

Possibly the most difficult ambiguity to resolve is syntactic ambiguity, which frequently involves modifier phrases and the question of which part of the sentence they modify. For example:

Ambiguity 5

Is the speaker asking the assistant to send her a reminder on Monday (when the office would be open and a call can be made), or to set up a reminder right away for altering an appointment that is occurring on Monday? Without further context, that cannot be resolved with any certainty.

Resolving ambiguity typically involves the following steps:

1. Determine all possible interpretations of the sentence.
2. Leverage as much context as possible to rank these interpretations by likelihood. This context can come from the sentence itself, from the domain, from world knowledge, or even from user data in a resource such as a CRM system.
3. If there is a clearly superior choice, select that interpretation; otherwise, ask a clarifying question.

While in text interfaces we want to minimize dialogue turns because they slow down the interaction, it is still important to balance efficiency against accuracy. After all, we don’t want our interactions with dialogue systems to be fodder for any new sitcom episodes.



Lisa Michaud

Lisa Michaud is the Director of Natural Language Processing (NLP) at Aspect. She has been centrally involved in the integration of NLP components into Aspect’s product suite for customer engagement and the architecting of our Interactive Text Response (chatbot) technology. She has 20 years of research experience in the field of Natural Language Processing / Computational Linguistics and pursues diverse interests in user modeling, dialogue, parsing, generation, and the analysis of non-grammatical text. She holds a PhD in Computer Science and has been published in multiple international journals, workshops, and conferences in the fields of user-adaptive interaction and Computational Linguistics.

2 thoughts on “The Future of ITR, Part III: Dealing with Natural Language Ambiguity”

  1. Cool summary Lisa!

    Allow me to just add that there is also ambiguity in “January 18” and “January 19”. Software that makes an appointment will at some point need to tie that down to “January 19 in year X”.

    Kurt “Time Flies Like a” Thomas

    1. Time references are very challenging in general, although if no year is specified it is FAIRLY safe to assume the current calendar year. It gets trickier with “next Tuesday.” When it is currently Monday, native English speakers seem to be divided 50/50 on whether “next Tuesday” is the very next Tuesday to occur or the Tuesday of next week. Same with “next weekend.”
