
AGENT: A Benchmark for Core Psychological Reasoning

ICML

Authors

Tianmin Shu, Abhishek Bhandwaldar, Chuang Gan, Kevin A. Smith, Shari Liu, Dan Gutfreund, Elizabeth Spelke, Joshua B. Tenenbaum, Tomer D. Ullman

Published on 02/24/2021

For machine agents to successfully interact with humans in real-world settings, they will need to develop an understanding of human mental life. Intuitive psychology, the ability to reason about hidden mental variables that drive observable actions, comes naturally to people: even pre-verbal infants can tell agents from objects, expecting agents to act efficiently to achieve goals given constraints. Despite recent interest in machine agents that reason about other agents, it is not clear whether such agents learn or hold the core psychology principles that drive human reasoning. Inspired by cognitive development studies on intuitive psychology, we present a benchmark consisting of a large dataset of procedurally generated 3D animations, AGENT (Action, Goal, Efficiency, coNstraint, uTility), structured around four scenarios (goal preferences, action efficiency, unobserved constraints, and cost-reward trade-offs) that probe key concepts of core intuitive psychology. We validate AGENT with human ratings, propose an evaluation protocol emphasizing generalization, and compare two strong baselines built on Bayesian inverse planning and a Theory of Mind neural network. Our results suggest that to pass the designed tests of core intuitive psychology at human levels, a model must acquire or have built-in representations of how agents plan, combining utility computations and core knowledge of objects and physics.
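The baselines mentioned in the abstract rest on a common idea: assume the observed agent plans efficiently (its utility is roughly reward minus cost), and invert that assumption with Bayes' rule to recover the hidden goal from observed actions. The sketch below is a minimal, illustrative version of that inference, not the paper's implementation; the function goal_posterior, its cost arguments, and the rationality parameter beta are assumptions introduced for this example.

import numpy as np

# A minimal sketch of goal inference by Bayesian inverse planning, assuming
# a softmax-rational agent: P(goal | actions) is proportional to
# P(actions | goal) * P(goal), where paths that waste cost relative to the
# cheapest route to a goal make that goal less likely.
def goal_posterior(cost_so_far, cost_to_go, optimal_cost, prior, beta=2.0):
    # cost_so_far:     cost of the agent's observed partial path (scalar)
    # cost_to_go[g]:   cheapest remaining cost from the current state to goal g
    # optimal_cost[g]: cheapest cost from the start state straight to goal g
    # prior[g]:        prior probability that the agent wants goal g
    # beta:            rationality parameter; higher = more efficient agent
    inefficiency = cost_so_far + cost_to_go - optimal_cost  # wasted cost per goal
    log_post = -beta * inefficiency + np.log(prior)
    log_post -= log_post.max()  # subtract the max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Two candidate goals; the observed path is efficient for goal 0 only.
print(goal_posterior(cost_so_far=3.0,
                     cost_to_go=np.array([2.0, 9.0]),
                     optimal_cost=np.array([5.0, 8.0]),
                     prior=np.array([0.5, 0.5])))  # mass concentrates on goal 0

Here the softmax over wasted cost is one common way to operationalize the efficiency principle; the paper's models additionally couple such utility-based planning with core knowledge of objects and physics to handle the unobserved-constraint scenarios.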

This paper was published at ICML 2021.

Please cite our work using the BibTeX below.

@misc{shu2021agent,
      title={AGENT: A Benchmark for Core Psychological Reasoning}, 
      author={Tianmin Shu and Abhishek Bhandwaldar and Chuang Gan and Kevin A. Smith and Shari Liu and Dan Gutfreund and Elizabeth Spelke and Joshua B. Tenenbaum and Tomer D. Ullman},
      year={2021},
      eprint={2102.12321},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}