On Hold: How Waiting For an Answer Affects the Choice of Question
Interactive Task Learning
To make a good decision, sometimes a computational agent needs to acquire helpful information from another agent. For example, if the agent is acting on behalf of a human user, the agent might fulfill the user's wishes better if it could ask the user clarifying questions. However, since answering questions can incur cost (such as distracting the user from other important tasks), fundamental questions arise about how the agent should decide which question(s), if any, are most worthwhile to ask. I will summarize some of our work in this area (done jointly with Satinder Singh and Rob Cohn). In particular, I will focus on challenges that arise when, because of communication delays or competition for attention, answers to questions might be delayed. In such situations, the agent needs to consider what question to ask *now* whose answer may in expectation provide useful information *later*, depending on what the agent decides to do while waiting for the answer.
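The core trade-off in the abstract above can be illustrated with a toy sketch: the agent compares the expected value of information (EVOI) of a question against its cost, where a delayed answer is worth less because the agent must act in the meantime. Everything below (the hidden states, utilities, discount factor, and the assumption of a perfectly informative answer) is an illustrative assumption, not the authors' actual model.

```python
# Toy sketch: myopic expected value of information (EVOI) for a question
# whose answer arrives only after a delay. All numbers are illustrative.

# Two possible hidden user preferences with a uniform prior belief.
prior = {"A": 0.5, "B": 0.5}

# Utility of each candidate action under each hidden state.
utility = {
    "act_for_A": {"A": 10.0, "B": 2.0},
    "act_for_B": {"A": 2.0, "B": 10.0},
}

def expected_utility(action, belief):
    return sum(p * utility[action][s] for s, p in belief.items())

def best_expected_utility(belief):
    return max(expected_utility(a, belief) for a in utility)

def evoi(question_cost, delay, discount=0.9):
    """Net myopic EVOI of a perfectly informative question whose answer
    arrives after `delay` steps; later information is discounted because
    the agent has already had to act while waiting."""
    # With the answer: the agent learns the true state and acts optimally.
    informed = sum(p * utility[f"act_for_{s}"][s] for s, p in prior.items())
    # Without the answer: the agent acts on the prior alone.
    uninformed = best_expected_utility(prior)
    # Discount the informational gain by the delay before it is usable.
    return (discount ** delay) * (informed - uninformed) - question_cost

for delay in (0, 1, 3):
    print(f"delay={delay}: net EVOI = {evoi(question_cost=1.0, delay=delay):.2f}")
```

As the delay grows, the net value of asking shrinks and can fall below the question's cost, at which point staying silent (or asking a different question) is the better choice, which is exactly the decision problem the talk examines.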
The goal of my research is to develop a general cognitive architecture that supports human-level behavior. One capability of humans that is missing in artificial agents is interactive task learning, where an agent learns new tasks from natural interactions with a human instructor. This contrasts with approaches where an agent already has some internal representation of the task but learns how to perform it well, as well as with approaches that rely on other modalities, such as demonstration, that restrict the complexity of the tasks that can be learned. Interactive task learning requires the integration of many areas of AI, including natural language processing, dialog management, object recognition and perception, actuation, language and concept grounding, spatial reasoning, knowledge representation, and general problem solving. Our approach builds on prior research on cognitive architecture (Soar), which provides the necessary representation, processing, and learning mechanisms. Our approach emphasizes mixed-initiative interaction, where the human provides advice and information, and the agent actively asks questions to acquire the knowledge it needs. Moreover, the agent learns by being situated in the task with the instructor, and it attempts to perform the task as it gains knowledge.
Ed Durfee is a Professor of Computer Science and Engineering, and of Information, at the University of Michigan, where he has served on the faculty for over 25 years. He conducts research on coordinating activities in multiagent systems. He is a Fellow of AAAI and of IEEE.
John E. Laird is the John L. Tishman Professor of Engineering at the University of Michigan, where he has been since 1986. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1983 working with Allen Newell. From 1984 to 1986, he was a member of research staff at Xerox Palo Alto Research Center. He is one of the original developers of the Soar architecture and leads its continued evolution. He was a founder of Soar Technology, Inc. and he is a Fellow of AAAI, AAAS, ACM, and the Cognitive Science Society.