I had an experience recently where a reporter and I were discussing a technology which he asserted did not use AI. I stopped him for a moment and asked, “How do you define Artificial Intelligence?” This was always one of my favorite questions to ask back when I was a professor. I would ask it right at the beginning of a semester, when all the students knew was their preconceptions from watching Star Wars and reading science fiction. It gave us a starting point for the next 14 weeks of discussions. It also established that the question is not just about what is incorporated into this field of science – it is also about what, philosophically, defines ‘intelligence’ for the individual. Because of the impassioned debates that would ensue (and my die-hard Liberal Arts major personal philosophies about teaching and learning), that course was one of my favorites to teach.
But that particular exchange was the first of what has lately become a common experience for me: hearing someone assert that Artificial Intelligence is equal to Machine Learning – that, for many individuals, this is the only part of the science they’ve ever heard about. And so I would like to put the academic robes back on for just a moment and give a quick overview of what the rest of Professor Michaud’s AI course used to cover.
I used to start with Logic: first, a symbolic language for representing information about the world, and second, a way to systematically leverage that information to come to new conclusions. For example, maybe I could program an AI to know that Aspect employees are passionate about excellence in customer engagement. I could also tell the AI that I work at Aspect. But unless it can apply the logical reasoning step known as Modus Ponens, it will not be able to conclude anything about my passions. However, if we do build those capabilities into an AI, it can make “common sense” deductions that are more than JUST the facts it is presented with.
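To make that concrete, here is a tiny sketch of that deduction in Python. The predicates, the rule, and the little forward-chaining loop are all my own invention for illustration – a real logic engine would use unification over a proper knowledge base – but the step it performs is exactly Modus Ponens: given "if X works at Aspect, X is passionate about customer engagement" and "Lisa works at Aspect," it derives the new fact.

```python
# Hypothetical facts and rules, written as simple strings for illustration.
facts = {"works_at(lisa, aspect)"}
rules = [
    # If X works at Aspect, then X is passionate about customer engagement.
    ("works_at({x}, aspect)", "passionate_about({x}, customer_engagement)"),
]

def forward_chain(facts, rules, individuals):
    """Apply Modus Ponens repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for x in individuals:
                if (premise.format(x=x) in derived
                        and conclusion.format(x=x) not in derived):
                    derived.add(conclusion.format(x=x))  # the Modus Ponens step
                    changed = True
    return derived

print(forward_chain(facts, rules, ["lisa"]))
# The derived set now includes passionate_about(lisa, customer_engagement)
```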
After establishing the basics for reasoning, we would address how to set up an AI with the knowledge of what is currently true about a situation, what it wants to be true, and how it could systematically explore solutions for how to transform a current state into a desired state. Now our AI could do more than reason; it could solve problems (sometimes efficiently, sometimes optimally, and ideally both).
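That "transform a current state into a desired state" idea can be sketched as a breadth-first search. The toy problem below (reach 10 from 1 using only "double" and "add 1") is my own invention, but the pattern – systematically explore successor states until the goal appears, guaranteeing a shortest solution – is the heart of classical AI problem solving.

```python
from collections import deque

def solve(start, goal, successors):
    """Breadth-first search: explore states reachable from `start`,
    level by level, until `goal` is found; return the action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, next_state in successors(state):
            if next_state not in seen:
                seen.add(next_state)
                frontier.append((next_state, path + [action]))
    return None  # no solution exists

# Toy problem: turn 1 into 10 using the actions "double" and "add1".
succ = lambda n: [("double", n * 2), ("add1", n + 1)]
print(solve(1, 10, succ))  # ['double', 'double', 'add1', 'double']
```

Because breadth-first search expands states in order of distance from the start, the first path it returns is an optimal (shortest) one – though, as the course would stress, not always an efficient one to find.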
When we covered Expert Systems and Case-Based Reasoning, we learned how an AI could be constructed to give advice, diagnose problems, or apply known solutions to new situations. When we covered Knowledge Representation, we talked about how some facts need to be represented in terms of the complex interrelationships between concepts in the world. Then for a while we dove into my particular specialty, Natural Language Processing – and realized the huge complexity of tackling a “real world” problem in which the facts were not unambiguous and the rules not necessarily universal.
All along our journey, we would return to a repeated theme in AI: that it is really difficult to lay the groundwork for this kind of decision-making, which always requires that we first encode human knowledge in some way. This led us, finally, to discuss machine learning: a powerful and effective alternative to hand-engineering representations of rules and facts, in which an AI observes examples and derives its own rules from them.
But no AI algorithm is a panacea. Even if an AI is learning, it still must be taught. Supervised machine learning entails a massive up-front effort preparing examples with human judgments from which to learn. Semi-supervised methods try to lessen this burden but introduce more risk of noise. Methods that learn from feedback rely on the machine’s ability to make choices, possibly wrong ones, and to receive feedback after the fact in order to learn over time which choices are right and which are wrong. For reliable results, human expertise is always part of the equation, and the future of AI probably lies with hybridized solutions – combining initial human effort with autonomous exploration – as the popularity of bootstrapping and semi-supervised methods shows.
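To illustrate what "learning from examples with human judgments" means in the supervised case, here is a minimal sketch. The labels and feature values are invented, and a one-nearest-neighbor classifier stands in for the many algorithms one could actually use; the point is that the human-labeled training set does the teaching.

```python
# Hand-labeled training examples: (features, human judgment).
# Both the features and the labels here are invented for illustration.
training = [
    ((1.0, 1.0), "spam"),
    ((0.9, 1.1), "spam"),
    ((0.1, 0.2), "ham"),
    ((0.2, 0.1), "ham"),
]

def classify(point):
    """Predict the label of the closest labeled example (1-nearest-neighbor)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: sq_dist(ex[0], point))[1]

print(classify((0.95, 1.0)))  # spam
print(classify((0.15, 0.15)))  # ham
```

Every one of those training pairs had to be judged by a person first – which is exactly the up-front human effort the paragraph above describes.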
To return to how this started, however, the bottom line is: There are more things in heaven and earth (and Artificial Intelligence), Horatio, than are dreamt of in your philosophy. It is a vast field that has been around almost as long as the computer itself, and there may be no limit to what it can do to continue to improve how we use machines to accomplish the tasks we need to do. It is certainly a field that readily captures the attention and the imagination of people from outside of technology, and it will likely continue to do so – always bridging the reality of today’s technology with the fantasy of what our imagination says it will soon be able to do.