A unifying framework for expectation-aware AI planning
Authors
- Sarath Sreedharan
- Tathagata Chakraborti
- Christian Muise
- Subbarao Kambhampati
Published on
03/18/2019
One of the primary challenges in interfacing systems with end users is dealing with the users' incorrect expectations or mental models. This means that even if the system is making the best decisions in its own model, it can appear inexplicable to the observer. In our 2020 AAAI workshop paper, entitled Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning, we show how an agent can incorporate the mental model of the user into its own planning problem, and we prove that this is theoretically no harder than the original planning task. This allows an agent to generate self-explanatory plans when there is an observer in the loop.
A primer on AI planning
Classical planning is the task of finding a procedural course of action for a declaratively described system to reach its goals while optimizing overall performance measures. Given the description of a task, automated planners find a sequence of operators, or a plan, that can transform the current state into the desired end state. Put simply, planning is the answer to the question "What should I do next?" An autonomous agent must be able to answer this question to function in its environment. This was among the first tasks posed during the formulation of the notion of "Artificial Intelligence" more than half a century ago, and to this day it remains a largely unsolved core AI competency due to the complexity of the task, in terms of both representation and reasoning.
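To make this concrete, below is a minimal Python sketch of classical planning as breadth-first forward state-space search over a toy warehouse domain. The fact names, the actions, and the bfs_plan helper are illustrative assumptions for this post, not from the paper.

```python
# A minimal sketch of classical planning as forward state-space search.
# States are frozensets of facts; each action is (name, preconditions, adds, deletes).
from collections import deque

def bfs_plan(init, goal, actions):
    """Breadth-first search from the initial state to any state satisfying the goal."""
    goal = frozenset(goal)
    start = frozenset(init)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                   # action is applicable
                succ = (state - delete) | add
                if succ not in visited:
                    visited.add(succ)
                    frontier.append((succ, plan + [name]))
    return None                                # no plan exists

# Toy warehouse domain: fetch a package and deliver it to the dock.
actions = [
    ("pick", frozenset({"at_shelf"}),           frozenset({"holding"}),   frozenset()),
    ("move", frozenset({"at_shelf"}),           frozenset({"at_dock"}),   frozenset({"at_shelf"})),
    ("drop", frozenset({"holding", "at_dock"}), frozenset({"delivered"}), frozenset({"holding"})),
]
print(bfs_plan({"at_shelf"}, {"delivered"}, actions))  # -> ['pick', 'move', 'drop']
```

Real planners replace this blind search with heuristics derived automatically from the declarative model, but the input/output contract is the same: a task description goes in, a plan comes out.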
Epistemic planning, as Bolander explains, is "the enrichment of classical planning with epistemic notions, that is, with knowledge and beliefs. Epistemic planning can naturally be seen as the combination of automated planning with epistemic logic, relying on ideas, concepts and solutions from both areas." The human or the robot can use these epistemic notions to reason about each other's beliefs, which can be arbitrarily nested, in order to anticipate how the other will react.
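As a rough illustration of what such nesting looks like, the following sketch represents first- and second-order beliefs as nested dictionaries. The representation and the holds helper are illustrative assumptions for this post, not Bolander's formalism.

```python
# A minimal sketch of nested beliefs. A belief model maps propositions to
# values; an entry keyed by another agent's name holds that agent's believed
# model, so beliefs can nest arbitrarily deep.
world = {"corridor_blocked": True}             # ground truth

beliefs = {
    "robot": {
        "corridor_blocked": True,              # first order: what the robot believes
        "human": {"corridor_blocked": False},  # second order: what the robot
                                               # believes the human believes
    },
}

def holds(prop, model):
    """Look up a proposition in a (possibly nested) belief model."""
    return model.get(prop, False)

# Before acting, the robot can anticipate an expectation mismatch:
if holds("corridor_blocked", beliefs["robot"]) != holds("corridor_blocked", beliefs["robot"]["human"]):
    print("Mismatch: the chosen plan may look inexplicable to the human.")
```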
Model-space search is the state-of-the-art method for dealing with model differences in planning. It searches through the space of differences between the model of the planner and that of the end user to figure out the best model to plan with and to anticipate how the agent's decisions will be evaluated in that model. This differs from classical planning, where the search is over the space of states the agent might be in; in model space, every state is itself a classical planning problem.
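The sketch below illustrates model-space search in a deliberately tiny setting: a "model" is reduced to a map from routes to costs (None meaning the route is unusable), and the search looks for a minimal set of corrections to the observer's model under which the agent's plan appears optimal. The names and the one-step domain are illustrative assumptions; in the general setting each candidate model is a full classical planning problem.

```python
# A toy model-space search: enumerate subsets of model differences, smallest
# first, until the agent's plan is optimal in the corrected observer model.
from itertools import combinations

def plan_cost(plan, model):
    """Cost of a plan in a model; plans using unavailable actions cost infinity."""
    if any(model.get(a) is None for a in plan):
        return float("inf")
    return sum(model[a] for a in plan)

def optimal_cost(model):
    """Cheapest achievable cost in this one-step toy domain."""
    costs = [c for c in model.values() if c is not None]
    return min(costs) if costs else float("inf")

def explain(plan, agent_model, human_model):
    """Search the space of model differences for a minimal explanation."""
    diffs = [a for a in agent_model if agent_model[a] != human_model.get(a)]
    for k in range(len(diffs) + 1):            # prefer shorter explanations
        for subset in combinations(diffs, k):
            candidate = {**human_model, **{a: agent_model[a] for a in subset}}
            if plan_cost(plan, candidate) == optimal_cost(candidate):
                return {a: agent_model[a] for a in subset}
    return None

# The robot takes the long route; the supervisor thinks the short one is open.
agent = {"short_route": None, "long_route": 5}
human = {"short_route": 2, "long_route": 5}
print(explain(["long_route"], agent, human))   # -> {'short_route': None}
```

Returning {'short_route': None} corresponds to telling the supervisor that the short route is unusable, after which the circuitous plan is exactly what they would expect.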
Managing expectations
Our paper considers the general setting of an autonomous agent coming up with plans to achieve some goal in the presence of an external observer who may have an incorrect understanding of the agent's capabilities. This situation can arise whenever a human is supervising or working with the agent.
For example, in a warehouse setting, a supervisor may be extremely confused if a package-fetching robot chooses a circuitous route when a shorter path exists that, unbeknownst to the supervisor, the robot cannot use. Worse still, the supervisor may have made arrangements expecting the robot to take that route.
The central contribution of our paper is a unified framework showing that planning with these external expectations is theoretically no harder than solving the classical problem in a single model.
We show how the generic human-aware planning problem can be compiled down into just another classical planning problem. This allows us to leverage highly optimized algorithms developed over half a century for the generic planning problem. We connect this approach to an extensively studied branch of AI called epistemic planning, thereby providing a concrete theoretical foundation for any future extensions of the work.
Apart from this computational advantage, the compilation also lets us model new human-aware behaviors not possible with state-of-the-art methods in this category, such as planning for implicit updates to the observer's mental model by means of the agent's behavior, in addition to explicit updates via communication and explanation. This is particularly exciting.
How it works
The computationally hard model-space search can be avoided by compiling the problem into classical planning, leveraging the representation of a restricted subclass of epistemic planning problems.
We do this by generating a planning problem in which each action the agent can perform accounts both for what the agent thinks will happen when that action is executed and for what the observer expects to happen. We also equip the compiled agent model with new actions, called "explanatory actions," that modify the observer's mental model when executed; this can happen either because the observer sees the action being performed or because the action models an explicit communication. Plans computed using this compilation are thus self-explanatory.
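Here is a minimal sketch of the idea, under our own simplified encoding: the compiled state carries each fluent twice, once as it actually is and once as the observer believes it to be (a B_ prefix below). Physical actions update both copies, while explanatory actions touch only the believed copy. The fluent names and the corridor domain are illustrative assumptions, not the paper's exact compilation.

```python
# A compiled state tracks both real fluents and the observer's believed
# copies (B_ prefix); each action is (preconditions, adds, deletes).
def apply(state, action):
    """Apply an action's add and delete effects to a state (a set of fluents)."""
    pre, add, delete = action
    assert pre <= state, "action not applicable"
    return (state - delete) | add

# Moving down the long corridor changes the world and the observer's beliefs...
move_long = (frozenset({"at_start", "B_at_start"}),
             frozenset({"at_goal", "B_at_goal"}),
             frozenset({"at_start", "B_at_start"}))

# ...while an explanatory action ("the short corridor is blocked") changes
# only the observer's mental model.
explain_blocked = (frozenset(), frozenset({"B_short_blocked"}), frozenset())

state = frozenset({"at_start", "B_at_start", "short_blocked"})
state = apply(state, explain_blocked)  # the explanation is part of the plan
state = apply(state, move_long)
print(sorted(state))
# -> ['B_at_goal', 'B_short_blocked', 'at_goal', 'short_blocked']
```

Because explanatory actions live in the same compiled model as physical actions, an off-the-shelf classical planner can decide when, and whether, to explain as part of ordinary plan synthesis.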
The biggest surprise is perhaps the fact that we were able to compile the expectation-aware planning problem into a classical planning problem at all. This allowed us to obtain a complete algorithm and to expand the behaviors afforded by mental modeling, while at the same time achieving a major computational advantage over the state of the art.
Summary
In relation to existing work, our algorithm is the first of its kind that can handle both ontic (ontological) and epistemic effects on the mental model, through communication as well as through observation. This opens up new behavior patterns, such as implicit explanations through behavior, that are not supported by the state of the art in human-aware planning and explainable AI. It also provides a large computational scale-up: the proposed approach circumvents the need to search the space of model differences between the agent and the user by leveraging a compilation of epistemic planning to classical planning.
Our work provides a unified framework for incorporating the mental model of the end user or observer into the decision-making task of an autonomous agent. We hope this will spur the principled design of human-aware agents that account for how their decisions will be perceived by humans in the course of their deliberative processes, particularly in the context of automated AI planning, where end-user models have traditionally been ignored in research.
While the planning community has looked at some of these issues in isolation, our proposed framework unifies several subfields in human-aware planning, explainable AI planning (XAIP), and epistemic planning, and will thus hopefully motivate and accelerate further research in this direction through a more concerted effort.
Please cite our work using the BibTeX below.
@inproceedings{exact2020,
  title     = {{Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning}},
  author    = {Sarath Sreedharan and Tathagata Chakraborti and Christian Muise and Subbarao Kambhampati},
  booktitle = {AAAI},
  year      = {2020}
}