Learning Rational Subgoals from Demonstrations and Instructions



We present a framework for learning useful subgoals that support efficient long-term planning to achieve novel goals. At the core of our framework is a collection of rational subgoals (RSGs), which are essentially binary classifiers over environmental states. RSGs can be learned from weakly annotated data, in the form of unsegmented demonstration trajectories paired with abstract task descriptions, which are composed of terms initially unknown to the agent (e.g., collect-wood then craft-boat then go-across-river). Our framework also discovers dependencies between RSGs, e.g., that collect-wood is a helpful subgoal for the task craft-boat. Given a goal description, the learned subgoals and the derived dependencies facilitate off-the-shelf planning algorithms, such as A* and RRT, by setting helpful subgoals as waypoints for the planner, which significantly improves performance-time efficiency. Project page:

This work was presented at AAAI 2023.
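To make the planning idea concrete, here is a minimal sketch (not the paper's implementation) of how subgoal classifiers can serve as waypoints for an off-the-shelf planner. BFS stands in for A*, a toy one-dimensional world stands in for the environment, and hand-written predicates stand in for learned RSG classifiers; all names and the example domain are illustrative assumptions.

```python
# Illustrative sketch: chaining a planner through subgoal classifiers.
# BFS stands in for A*; the predicates stand in for learned RSGs.
from collections import deque

def bfs(start, goal_test, neighbors):
    """Search from `start` until a state satisfying `goal_test` is found."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if goal_test(s):
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in neighbors(s):
            if n not in parent:
                parent[n] = s
                frontier.append(n)
    return None

def plan_with_subgoals(start, subgoal_tests, neighbors):
    """Plan to each subgoal classifier in order, concatenating segments."""
    full_path, state = [start], start
    for test in subgoal_tests:
        segment = bfs(state, test, neighbors)
        if segment is None:
            return None
        full_path.extend(segment[1:])
        state = segment[-1]
    return full_path

# Toy world: states are integers, moves are +/-1.
neighbors = lambda s: [s - 1, s + 1]
# Stand-ins for learned RSGs for "collect-wood then craft-boat":
subgoals = [lambda s: s == 3, lambda s: s == 7]
path = plan_with_subgoals(0, subgoals, neighbors)  # [0, 1, ..., 7]
```

The point of the decomposition is that each search segment only has to reach the next subgoal state rather than the distant final goal, which is what makes the waypoints useful to the planner.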

Please cite our work using the BibTeX below.

@inproceedings{luo2023learning,
  title={Learning Rational Subgoals from Demonstrations and Instructions},
  author={Luo, Zhezheng and Mao, Jiayuan and Wu, Jiajun and Lozano-Pérez, Tomás and Tenenbaum, Joshua B. and Kaelbling, Leslie Pack},
  booktitle={Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence},
  year={2023}
}