
3DP3: 3D Scene Perception via Probabilistic Programming

NeurIPS 2021

Authors

Nishad Gothoskar, Marco Cusumano-Towner, Ben Zinberg, Matin Ghavamizadeh, Falk Pollok, Austin Garrett, Joshua B. Tenenbaum, Dan Gutfreund, Vikash K. Mansinghka

We present 3DP3, a framework for inverse graphics that uses inference in a structured generative model of objects, scenes, and images. 3DP3 uses (i) voxel models to represent the 3D shape of objects, (ii) hierarchical scene graphs to decompose scenes into objects and the contacts between them, and (iii) depth image likelihoods based on real-time graphics. Given an observed RGB-D image, 3DP3’s inference algorithm infers the underlying latent 3D scene, including the object poses and a parsimonious joint parametrization of these poses, using fast bottom-up pose proposals, novel involutive MCMC updates of the scene graph structure, and, optionally, neural object detectors and pose estimators. We show that 3DP3 enables scene understanding that is aware of 3D shape, occlusion, and contact structure. Our results demonstrate that 3DP3 is more accurate at 6DoF object pose estimation from real images than deep learning baselines and shows better generalization to challenging scenes with novel viewpoints, contact, and partial observability.
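To make the inference structure concrete, the sketch below is a minimal toy illustration in Python of the analysis-by-synthesis loop the abstract describes: render a depth image from a hypothesized pose, score it under a per-pixel Gaussian depth likelihood, and update the pose with Metropolis-Hastings. This is not 3DP3's implementation: the renderer, the likelihood, and every name here (render_depth, log_likelihood) are simplified assumptions, and the paper's actual inference uses voxel shape models, scene graphs, fast bottom-up pose proposals, and involutive MCMC moves over scene graph structure.

import numpy as np

rng = np.random.default_rng(0)

def render_depth(pose, size=16):
    # Toy stand-in for a real-time renderer: a flat 8x8 square at
    # depth pose[2], translated by (pose[0], pose[1]) in the image plane.
    img = np.full((size, size), 5.0)  # background depth
    x0 = int(np.clip(pose[0] + 4, 0, size))
    x1 = int(np.clip(pose[0] + 12, 0, size))
    y0 = int(np.clip(pose[1] + 4, 0, size))
    y1 = int(np.clip(pose[1] + 12, 0, size))
    img[x0:x1, y0:y1] = pose[2]
    return img

def log_likelihood(observed, rendered, noise=0.1):
    # Per-pixel Gaussian depth likelihood (a common simplification).
    return -0.5 * np.sum(((observed - rendered) / noise) ** 2)

# Synthetic observation: an object at the true pose, plus sensor noise.
true_pose = np.array([2.0, 1.0, 2.0])
observed = render_depth(true_pose) + rng.normal(0.0, 0.1, (16, 16))

# Random-walk Metropolis-Hastings over the pose (flat prior and a
# symmetric proposal, so the log acceptance ratio is ll_new - ll).
pose = np.array([0.0, 0.0, 3.0])
ll = log_likelihood(observed, render_depth(pose))
for _ in range(2000):
    proposal = pose + rng.normal(0.0, 0.3, 3)
    ll_new = log_likelihood(observed, render_depth(proposal))
    if np.log(rng.uniform()) < ll_new - ll:
        pose, ll = proposal, ll_new

print("inferred pose:", np.round(pose, 2), "true pose:", true_pose)

Even in this toy form, one design choice is visible: the generative model (render, then compare) is separate from the inference moves, so richer updates, such as the data-driven pose proposals and scene graph structure moves in the paper, can be swapped in without changing the model.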

This paper was published at NeurIPS 2021.

Please cite our work using the BibTeX below.

@misc{gothoskar20213dp3,
      title={3DP3: 3D Scene Perception via Probabilistic Programming}, 
      author={Nishad Gothoskar and Marco Cusumano-Towner and Ben Zinberg and Matin Ghavamizadeh and Falk Pollok and Austin Garrett and Joshua B. Tenenbaum and Dan Gutfreund and Vikash K. Mansinghka},
      year={2021},
      eprint={2111.00312},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}